Showing posts with label EDA INDUSTRY VERIFICATION. Show all posts

Friday, February 22, 2019

Quick chat with Tom Fitzpatrick : recipient of Accellera Technical Excellence Award


Moore’s law, the driving force behind the evolution of the semiconductor industry, observes that the complexity on silicon doubles every 2 years. To sustain this outcome for the past 50+ years, several enablers had to evolve at the same or a much faster pace. Verification as a practice unfolded as part of this journey and entered the mainstream, maturing with every new technology node. To turn the wheel each time, various spokes in the form of languages, EDA tools, flows, methodologies, formats & platforms get introduced. This requires countless hours of contribution from individuals representing a diverse set of organizations & cultures, putting the canvas together for us to paint the picture.

Tom Fitzpatrick
At DVCon, Accellera recognizes the outstanding achievement of an individual and his/her contributions to the development of standards. This year, Tom Fitzpatrick, Vice Chair of the Portable Stimulus Working Group and member of the UVM Working Group, is the recipient of the 8th annual Accellera Technical Excellence Award. Tom, who represents Mentor at Accellera, has more than 3 decades of rich experience in this industry. In a quick chat with us, he shares his journey as a verification engineer, technologist & evangelist!!!

Many congratulations, Tom! Tell us about yourself & how you got started in the verification domain.

Thanks! I started my career as a chip designer at Digital Equipment Corporation after graduating from MIT. During my time there, I was the stereotypical “design engineer doing verification,” and learned a fair amount about the EDA tools, including developing rather strong opinions about what tools ought to be able to do and not do. After a brief stint at a startup, I worked for a while as a verification consultant and then moved into EDA at Cadence. It was in working on the rollout of NC-Verilog that I really internalized the idea that verification productivity is not the same thing as simulator performance. That idea is what has really driven me over the years in trying to come up with new ways to make the task of verification more efficient and comprehensive.

Great! You have witnessed verification evolving over decades. How has your experience been on this journey?

I’m really fortunate to have “grown up” with the industry over the years, going from schematics and vectors to where we are now. I had the good fortune to do my Master’s thesis while working at Tektronix, being mentored by perhaps the most brilliant engineer I have ever known. I remember the board he was working on at the time, which had both TTL and ECL components, multiple clock domains, including a voltage-controlled oscillator and phase-locked loop, and he got the whole thing running on the first pass doing all of the “simulation” and timing analysis by hand on paper. That taught me that even as we’ve moved up in abstraction in both hardware and verification, if you lose sight of what the system is actually going to do, no amount of debug or fancy programming is going to help you.

For me, personally, I think the biggest evolution in my career was joining Co-Design Automation and being part of the team that developed SUPERLOG, the language that eventually became SystemVerilog. Not only did I learn a tremendous amount from luminaries like Phil Moorby and Peter Flake, but the company really gave me the opportunity to become an industry evangelist for leading-edge verification. That led to working on VMM with Janick Bergeron at Synopsys and then becoming one of the original developers of AVM and later OVM and UVM at Mentor. From there I’ve moved on to Portable Test and Stimulus as well.

So, what according to you were key changes that have impacted verification domain the most?

I think there were several. The biggest change was probably the introduction of constrained-random stimulus and functional coverage in tools like Specman and Vera. Combined with concepts like object-oriented programming, these really brought verification into the software domain, where you could model things like the user accidentally pressing multiple buttons simultaneously and other things the designer didn’t originally think would happen. I think it was huge for the industry to standardize on UVM, which codified those capabilities in SystemVerilog so users were no longer tied to those proprietary solutions, and the fact that UVM is now the dominant methodology in the industry bears that out. As designs have become so much more complex, including having so much software content, I hope that Portable Stimulus will serve as the next catalyst to grow verification productivity.

Tom, you have been associated with Accellera for a long time & have contributed to multiple standards in different capacities. How has your experience been working on standards?

My experience with standards has been entirely self-inflicted. It started when I was at Cadence and heard about a committee standardizing Verilog but that there were no Cadence people on the committee. I kind of joined by default, but it’s turned out to be a huge part of my career. Aside from meeting wonderful people like Cliff Cummings, Stu Sutherland and Dennis Brophy, my work on standards over the years has given me some unique insights into EDA tools too. I’ve always tried to balance my “user side,” where I want the standard to be something I could understand and use, with my “business side,” where I have to make sure that the standard is supportable by my company, so I’ve had to learn a lot more than someone in my position otherwise might about how the different simulators and other tools actually work. On a more practical note, working on standards committees has also helped me learn everything from object-oriented programming to Robert’s Rules of Order.

You have been one of the key drivers behind development of Portable Test and Stimulus Standard (PSS). How was your experience working on this standard compared to UVM?

Good question! UVM was much more of an exercise in turning existing technology into an industry standard, which involved getting buy-in from other stakeholders, including ultimately VMM advocates, but we didn’t really do a whole lot of “inventing.” That all happened mostly between Mentor and Cadence in developing the OVM originally. We also managed to bring most of the VMM Register Abstraction Layer (RAL) into UVM a bit later.

Portable Stimulus has been different for two reasons. First, I’m the vice-chair of the Working Group, so I’ve had to do a lot more planning than I did for UVM. The other is that, since the technology is relatively new, we had the challenge of combining the disparate capabilities and languages used by existing tools into a new declarative language that has different requirements from a procedural language like SystemVerilog. We spent a lot of time debating whether the standard should include a new language or whether we should just use a C++ library. It took some diplomacy, but we finally agreed to the compromise of defining the new language and semantics, and then producing a C++ library that could be used to create a model with the same semantics. To be honest, we could have played hardball and forced a vote to pick only one or the other, but we wanted to keep everyone on board. Since we made that decision, the working group has done a lot of really great work.

What are the top 2-3 challenges that you observe we, as an industry, need to solve in the verification domain?

Remember when I said earlier that verification productivity is about more than simulator performance? Well, with designs as big and involved as they are today – and only going to get more so – we’re back at the point where you need a minimum amount of performance just to be able to simulate the designs to take advantage of things like UVM or Portable Stimulus without it taking days. This is actually part of the value of Portable Stimulus in that the engine can now be an emulator, FPGA prototype or even silicon and you can get both the performance to get results relatively quickly and the productivity as well.

The other big challenge, I think, is going to be the increasing software content of designs. Back when I started, “embedded software” meant setting up the hardware registers and then letting the hardware do its thing. It made verification relatively easy because RTL represents physical hardware, which doesn’t spontaneously appear and disappear the way software can. We’ve spent the last ten or so years learning how to use software techniques in verification to model the messy stuff that happens in the real world and making sure that the hardware would still operate correctly. When you start trying to verify a system whose software can spontaneously spawn multiple threads to make something happen, it becomes much harder. Trying to get a handle on that for debug and other analysis is going to be a challenge.

But perhaps the biggest challenge is going to be just handling the huge amounts of data and scenarios that are going to have to be modelled. Think about an autonomous car, and all of the electronics that are going to have to be verified in an environment that needs to model lots of other cars, pedestrians, road hazards and tons of other stuff. When I let myself think about that, it seems like it could be a larger leap than we’ve made since I was still doing schematic capture and simulating with vectors. I continue to be blessed to now work for a company like Siemens, which is actively engaging this very problem.

Based on your vast experience, any words of wisdom to the practicing & aspiring verification engineers?

I used to work with a QA engineer who was great at finding bugs in our software. Whenever a new tool or user interface feature came out, he would always find bugs in it. When I asked him how he did it, he said he would try to find scenarios that the designer probably hadn’t thought about. That’s pretty much what verification is. Most design engineers are good enough that they can design a system to do the specific things they think about, even specific corner cases. But they can’t think of everything, especially with today’s (and tomorrow’s) designs. Unfortunately, if it’s hard for the design engineer to think of, it’s probably hard for the verification engineer to think of too. That’s why verification has become a software problem – because that’s the only way to create those unthought-of scenarios.

Thank you, Tom, for sharing your insights & thoughts.
Many congratulations once again!!!


DVCon US 2019 - February 25-28, 2019 - Double Tree Hotel, San Jose, CA

Saturday, August 27, 2016

Quick chat with Wally : Keynote speaker, DVCon India 2016

Walden C. Rhines
It takes a village to raise a child! The same holds for the growth of an engineer: YES, it does require Contribution from many & Collaboration with many. While our respective teams play the role of a family, the growth is accelerated when we Connect beyond these boundaries. DVCon India is one such platform enabling all of these for the design, verification & ESL community. The 3rd edition of DVCon India is planned for September 15-16 at Leela Palace, Bangalore.

The opening keynote on Day 1 is from Walden C. Rhines, CEO & Chairman, Mentor Graphics. It is always a pleasure to hear his insights on the semiconductor & EDA industry. This year, he has picked an interesting topic – “Design Verification: Challenging Yesterday, Today and Tomorrow”. While we all wait with excitement to hear him on Sept 15, Wally was kind enough to share his thoughts on some queries that came up after I read the brief of his keynote. Below is an unedited version of the dialogue.

Wally, your keynote topic is an excellent start to the program, taking the challenges head on. Tell us more about it?

Our industry has done a remarkable job of addressing rising complexity in terms of both design and verification productivity. What’s changed recently in verification is the emergence of a new set of requirements beyond the traditional functional domain. For example, we have added clocking, power, performance, and software requirements on top of the traditional functional requirements; and each of these new requirements must be verified. While the continual development of new standards and methodologies has enabled us to keep pace with rising complexity and stay productive, we are seeing that requirements for security and safety are becoming more important and could ultimately pose challenges more daunting than those we have faced in the past.

In the last few years, ESL adoption has improved a lot. Is it the demand to move to a higher abstraction level, or the convergence of diverse tool sets into a meaningful flow, that is driving it?

Actually, a little of both. Historically, our industry has addressed complexity by raising abstraction when possible. For example, designers now have the option of using C, SystemC, or C++ as a design entry language combined with high-level synthesis to dramatically shorten the design and verification cycle by producing correct-by-construction, error-free, power-optimized RTL.

Moving beyond high-level synthesis, we are seeing new ESL design methodologies emerge that allow engineers to perform design optimizations on today’s advanced designs more quickly, efficiently, and cost-effectively than with traditional RTL methodologies by prototyping, debugging, and analyzing complex systems before the RTL stage.  ESL establishes a predictable, productive design process that leads to first-pass success when designs have become too massive and complex for success at the RTL stage.

The rise of IoT is stretching design demands to the far ends, i.e. server-class vs. edge-node devices. How does the EDA community view this problem statement?

Successful development of today’s Internet of Things products involves the convergence of best practices for system design that have evolved over the past 30 years. However, these practices were historically narrowly focused on specific requirements and concerns within a system. Today’s IoT ecosystems combine electronics, software, sensors, and actuators, all interconnected through a hierarchy of complex levels of networking. At the lowest level, the edge node as you referred to it, advanced power management is fundamental for an IoT solution to succeed, while at the highest level within the ecosystem, performance is equally critical. Obviously, EDA solutions exist today to design and verify each of these concerns within the IoT ecosystem. Yet more productivity can be achieved with more convergence of these solutions where possible. For example, there is a need today to eliminate the development of multiple silos of verification environments that have traditionally existed across various verification engines—such as simulation, emulation, prototyping, and even real silicon used during post-silicon validation. In fact, work has begun within Accellera to develop a Portable Stimulus standard, which will allow engineers to specify the verification intent once, in terms of stimulus and checkers, which can then be retargeted through automation to a diverse set of verification engines.

Wally you seem to love India a lot! We see frequent references from you about the growing contribution of India to the global semiconductor community. Any specific trends that you would like to highlight?

Perhaps one of the most striking findings from our 2016 Wilson Research Group Functional Verification Study is how India is leading the world in terms of verification maturity. We can measure this phenomenon by looking at India’s adoption of SystemVerilog and UVM compared to the rest of the world, as well as India’s adoption of various advanced functional verification techniques, such as constrained-random simulation, functional coverage, and assertion-based techniques.

This is the 2nd time you will be delivering a keynote at DVCon India. What are your expectations from the conference?

I expect that the 2016 DVCon India will continue its outstanding success as a world-class conference, growing in both attendance and exhibitor participation, while delivering high-quality technical content and enlightening panel discussions.

Thank you, Wally! We look forward to seeing you at DVCon India 2016.

Disclaimer: “The postings on this blog are my own and do not necessarily reflect the views of Aricent”

Monday, May 23, 2016

.....of Errors & Mistakes in Verification

Miniaturization of devices has led to packing more functionality onto a given slice of silicon. An after-effect is heating of the device due to increased power consumption, and the discovery of innovative ways of cooling these components. As electronics went wireless, the concern over power came to the forefront, for who wants to recharge the battery every second hour? Different techniques have since been adopted to address this growing concern. One such technique is letting parts of the silicon go into hibernation and triggering a wake-up when needed. My hibernation from blogging was no different, except that though I received many pokes during this time, the triggers probably weren’t effective enough to tantalize the antennas of the blogger in me. It was only during a recent verification event hosted by Mentor Graphics that my friend Ruchir Dixit, Technical Director – India at Mentor Graphics, introduced the event with an interesting thought touching on the basics of verification. The message completely resonates with the idea of this blog – exploring verification randomly but rooted in basics – and I took it as a sign to get the ball rolling again. To start with, I am sharing the thoughts that actuated this restart. Thank you, Ruchir, for allowing me to share them.


Source: Slides from Ruchir Dixit - 'Verification Focus & Vision' presented at Verification Forum, Mentor Graphics, India

Before we unfold the topic further, have you ever wondered why computers only spell out ERRORS & not MISTAKES?

Let’s start by understanding the basic difference between an error & a mistake. A mistake is usually a free choice that turns out to be wrong because the outcome is wrong. Mistakes are made accidentally or through lapses in performance, but they can be prevented or corrected. An error, on the other hand, is a violation of a golden reference or set of rules that would have led to a different action and outcome. Errors are typically the result of a lack of knowledge, not of choice. That is why a computer doesn’t make mistakes and only throws errors on screen, when it is unable to move forward on a pre-defined set of actions or sees a violation of them. And that again is why you see Warnings & Errors from our EDA tools, and not Mistakes :) Machines don’t make mistakes… we do!

Now, talking about verification, the sole reason we verify is BUGS! And the source of these BUGS is the ERRORS & MISTAKES committed as part of code development.

Mistakes, as we understood earlier, are the result of a free choice. While no one wants to make a bad choice, mistakes still creep into the code due to distractions or coding in a hurry. Preventing or correcting such mistakes takes basic discipline, and that is where the EDA tools come to the rescue, assisting you in making the right choice.
 
Errors typically happen due to ignorance about the subject, or partial knowledge leading to wrong assumptions. This can further find its roots in incomplete documentation or an incorrect understanding of the subject. Given that documentation & the conclusions drawn from it are rather subjective, it is hard to define the right way to document anything. The only way to minimize errors is to prevent them from occurring by defining a clear set of rules to be followed, and that is where ‘Methodology’ comes into the picture. A classic example is a template generator for UVM code, ensuring the code is correct by construction & integrates seamlessly at different levels. Having coding guidelines is another way to reduce errors. Uncovering the remaining errors is where tests become important: unless we stimulate a scenario, we may not know what & where the error is.
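A template generator of the kind mentioned above can be sketched in a few lines. The Python below is purely illustrative – the function name `make_uvm_driver` and the skeleton it emits are my own invention, not any shipping tool – but it shows the ‘correct by construction’ idea: the component name is validated once and stamped consistently into every place the boilerplate needs it.

```python
# Hypothetical sketch: generate a consistent UVM driver skeleton from a
# component name, so the boilerplate is correct by construction.
DRIVER_TEMPLATE = """class {name}_driver extends uvm_driver #({name}_item);
  `uvm_component_utils({name}_driver)

  function new(string name = "{name}_driver", uvm_component parent = null);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);
      // drive req onto the interface here
      seq_item_port.item_done();
    end
  endtask
endclass
"""

def make_uvm_driver(name: str) -> str:
    """Return SystemVerilog source for a minimal UVM driver skeleton."""
    if not name.isidentifier():
        raise ValueError(f"invalid component name: {name}")
    return DRIVER_TEMPLATE.format(name=name)

if __name__ == "__main__":
    print(make_uvm_driver("spi"))
```

Because every occurrence of the name comes from one validated source, the class name, factory registration and constructor default can never drift apart – the class of error the paragraph above describes simply has no place to creep in.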

So while errors & mistakes are unavoidable, it is the deployment of the right set of methodologies and tools that leads to bug-free silicon…. In time…. Every time!

After writing this post, I was tempted to say that ‘To ERR is HUMAN and to FORGIVE or VERIFY is DIVINE!’

But then that would be a MISTAKE again :)

Happy Bug Hunting!!!


Disclaimer: “The postings on this blog are my own and do not necessarily reflect the views of Aricent”

Sunday, March 2, 2014

Back to the basics : Power Aware Verification - 1

The famous quote from the movie ‘Spiderman’ – “With great power comes great responsibility” – gets a twist when applied to the semiconductor industry, where “With LOW power comes great responsibility”. The design cycle that once focused on miniaturization shifted gears to achieve higher performance in the PC era, and now, in the world of connected devices, it is POWER that commands the most attention. Energy consumption today drives the operational expense of servers and data centers, so much so that companies like Facebook are setting up data centers close to the Arctic Circle, where the cold weather reduces cooling expense. On the other hand, for consumer devices, ‘mean time between charging’ is considered one of the key factors defining user experience. This means that environmentally friendly, green products resulting from attention to low power in the design cycle can bring down the overall cooling/packaging cost of a system, reduce the probability of system failure and conserve energy.

DESIGN FOR LOW POWER

Existing hardware description languages like Verilog & VHDL fall short of semantics to describe the low-power intent of a design. These languages were primarily defined to represent the functional intent of a circuit. Early adopters of low-power design methodology had to manually insert cells during the development phase to achieve the desired results. This process was error prone, with limited automation and almost no EDA support. ArchPro, an EDA startup (acquired by Synopsys in 2007), provided one of the early solutions in this space. Unified Power Format (UPF – IEEE 1801) and Common Power Format (CPF, from Si2) are the two Tcl-based design representation approaches available today to define low-power design intent. All EDA vendors support either or both formats to aid the development of power-aware silicon.

VERIFYING LOW POWER DESIGN

Traditional simulators are tuned to HDLs, i.e. logic 1/0, and have no notion of voltage or of power turning on/off. For the first generation of low-power designs, where cells were inserted manually, verification was mostly script based: forcing X (unknown) on internal nodes of the design and verifying the outcome. Later, when power formats were adopted for design representation, verifying this intent demanded additional support from the simulators, such as –

- Emulating cells like isolation and state retention during RTL simulations to verify the power-sequencing features of the design. These cells are otherwise inserted into the netlist by the synthesis tool, taking the power format representation as the base
- Simulating power ON/OFF scenarios such that a block that is turned off has all outputs going to X (unknown)
- Simulating the voltage ramp-up cycle, i.e. once the power is turned ON, it takes some time for the voltage to ramp up to the desired level, and during this period the functionality is not guaranteed
- Simulating multi-voltage scenarios in the design, where in the absence of level-shifter cells the relevant signals are corrupted
- Simulating all of the above together, resulting in a real use-case scenario
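As a thought experiment, the first two behaviors in the list above – outputs corrupting to X on power-down and an isolation cell clamping the boundary – can be mimicked with a toy model. This Python sketch is illustrative only: the class and function names are invented, and real simulators implement far richer UPF/CPF semantics.

```python
# Toy model of power-aware simulation semantics (illustrative only):
# a powered-down block drives 'X' on its outputs, and an isolation
# cell clamps the domain-crossing signal to a known safe value.
class PowerDomain:
    def __init__(self):
        self.powered = True

    def output(self, value):
        # With the domain powered down, outputs are unknown ('X').
        return value if self.powered else "X"

def isolation_cell(signal, domain_powered, clamp_value=0):
    # While the source domain is off, clamp to a safe value instead
    # of propagating 'X' into the always-on logic.
    return signal if domain_powered else clamp_value

dom = PowerDomain()
assert dom.output(1) == 1                               # powered: value passes
dom.powered = False                                     # power the block down
assert dom.output(1) == "X"                             # outputs corrupt to X
assert isolation_cell(dom.output(1), dom.powered) == 0  # clamp holds safe 0
```

The point of the toy is the same one the simulators make: without the clamp, the ‘X’ from the dead domain would propagate into always-on logic, which is exactly the class of bug power-aware simulation is meant to expose.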

Tools from ArchPro (MVRC & MVSIM) worked with industry-standard simulators through PLI to simulate power-aware designs. Today all industry-standard simulators can verify such designs, with limited or complete support for the UPF & CPF feature lists. Formal/static tools are available to perform quick structural checks on the design, targeting correct placement of cells, correct choice of cells, correct connections to the cells, and ensuring power integrity of the design based on the power format definition at the RTL and netlist levels. Dynamic simulations further ensure that the power intent is implemented as per the architecture, by simulating the power states of the design as functional scenarios.

CONCLUSION

In the last decade, the industry has collaborated at different levels to realize power-aware design for different applications. Low power was initially adopted for products targeting the consumer and handheld markets, but today it is pervasive across all segments that the semiconductor industry serves. The key is to define the low-power intent early and incorporate tools to validate that the intent is maintained throughout. As a result, low power has led to greater responsibility for all stakeholders in the design cycle in general, and for verification engineers in particular!

The DVCon 2014 issue of Verification Horizons has an article, “Taming power aware bugs with Questa”, co-authored by me on this subject.


Drop in your questions and experiences with low power designs in the comment section or write to me at siddhakarana@gmail.com

Monday, October 14, 2013

Trishool for verification

It’s that time of the year when I try to correlate mythology with verification. Yes, the festive season is back in India, and this is the time when we celebrate the fact that good prevails over evil. Given the diversity of Indian culture, there are a variety of mythological stories about demigods triumphing over evil. For us in the verification domain, it is the BUG that plays the evil, preventing us from achieving first-silicon success. While the consumerism of electronic devices worked wonders in increasing the size of the business, it actually forced ASIC teams to gear up to develop better products on shrinking schedules. Given that verification is the long pole in achieving this goal, verification teams need a solution that ensures the functionality on chip is intact in a time-bound fashion. Rising design complexity has further transformed verification into a problem so diverse that a single tool or methodology is unable to solve it. Clearly a multi-pronged approach.... a TRISHOOL is required!
 
 
TRISHOOL is a Sanskrit word meaning ‘three spears’. The symbol is polyvalent and is wielded by the Hindu God Shiva and Goddess Durga. Even Poseidon, the Greek God of the sea, and Neptune, the Roman God of the sea, are known to carry it. The three points have various meanings and significance, one common explanation being that Lord Shiva uses the Trishool to destroy the three worlds: the physical world, the world of culture drawn from the past, and the world of the mind representing the processes of sensing and acting. In a physical sense, three spears would be far more fatal than a single one.
 
So basically we need a verification strategy equivalent to a Trishool, to ensure that our efforts converge in rendering a fatal blow to the hidden bugs. The verification tools available today correlate well with the spears of the Trishool and, if put together, would weed out bugs in a staged manner moving from IP to SoC verification. So which are the 3 main tools that should be part of this strategy to nail down the complex problem we are dealing with today?
 
Constrained Random Verification (CRV) is the workhorse of IP verification, and the reuse of these efforts at the SoC level makes it the main, central spear of the verification Trishool strategy. CRV is complemented by the other two spears on either side, i.e. Formal Verification and Graph-based Verification. The focus of CRV is more at the IP level, wherein the design is stressed under constraints to attack the thought-through features (verification plan & coverage) and also hit corner cases or areas not comprehended. As we move from IP to SoC level, we face two major challenges. One is repetitive code, typically combinational in nature, where formal techniques can prove the functionality comprehensively and in limited simulation cycles. The second is developing SoC-level scenarios that cover all aspects of the SoC and not just the integration of IPs. The third spear, graph-based verification, comes to the rescue at this level, where multi-processor integration, use cases, low-power scenarios and performance simulations are enabled with much greater ease.
 
Hope this festive season, while we enjoy the food & sweets, the stories and enactments we experience around us will also give us enough food for thought towards a Trishool strategy for verification.
 
Happy Dussehra!!!
 

Sunday, June 23, 2013

Leveraging Verification manager tools for objective closure

Shrinking schedules, topped with rising expectations of more functionality, opened the gates for reusability in the semiconductor industry. Reusability (internal or external) is constantly on the rise in both design & verification. Following are some trends on reusability presented by Harry Foster at DVCLUB UK 2013, based on the 2012 Wilson Research Group study commissioned by Mentor Graphics.
 



Both IPs and SoCs now demand periodic releases targeting specific features for a customer or a particular product category. It is important to be objective in verifying the design for the given context, ensuring that the latest verification tools & methodologies do not lose the required focus. With verification claiming most of the ASIC design schedule in terms of effort & time, conventional schemes fail at managing verification progress and extending predictable closure. There is a need for a platform that helps direct the focus of CRV, brings automation around coverage, provides initial triaging of failures and aids methodical verification closure. While a lot of this has been done using in-house scripts, significant time is spent maintaining them. There are multiple solutions available in the market, and the beauty of being in consulting is that you get to play around with most of them, given customer preferences towards a particular EDA flow.
 
QVM (Questa Verification Manager) is one such platform, provided by Mentor Graphics. I recently co-authored an article (“QVM: Enabling Organized, Predictable and Faster Verification Closure”) published in Verification Horizons, DAC 2013 edition. It is available on Verification Academy, or you can download the paper here too.

Sunday, January 27, 2013

Evolution of the test bench - Part 2

In the last post, we looked into the directed verification approach, where the test benches were typically dumb while the tests comprised stimuli and monitors. Progress on verification bore a linear relationship to the number of tests developed and passing. There was no concept of functional coverage, and even the usage of code coverage was limited. Apart from HDLs, programming languages like C & C++ continued to support the verification infrastructure. Managing the growing complexity under constant pressure to reduce the design schedule demanded an alternate approach to verification. This gave birth to a new breed of languages – HVLs (Hardware Verification Languages).
 
HVLs
 
The first in this category, introduced by Verisity, was popularly known as the ‘e’ language. It was based on AOP (Aspect-Oriented Programming) and required a separate tool (Specman) in addition to the simulator. This language spearheaded the entry of HVLs into verification, and was followed by ‘Vera’, based on OOP (Object-Oriented Programming) and promoted by Synopsys. Along with these two languages, SystemC tried to penetrate this domain with support from multiple EDA vendors but couldn’t gain wide acceptance. The main idea promoted by all these languages was CRV (Constrained Random Verification). The philosophy was to empower the test bench with all the features of drivers, monitors, checkers and a library of sequences/scenarios. Test generation was automated, with state-space exploration guided by constraints and progress measured using functional coverage.
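The CRV philosophy described above – constrain the legal stimulus space, randomize within it, and measure progress with functional coverage – can be illustrated with a toy Python sketch. The address window and burst-length bins below are invented for illustration and mirror no particular methodology.

```python
import random

# Toy constrained-random stimulus: every generated bus transaction keeps
# its address inside a legal window and its burst length inside a legal
# set; functional coverage is the fraction of burst bins exercised.
random.seed(0)  # make the run reproducible

LEGAL_ADDR = range(0x1000, 0x2000)   # constraint: legal address window
BURST_BINS = {1, 2, 4, 8}            # coverage bins for burst length

covered = set()
for _ in range(100):
    txn = {
        "addr": random.choice(LEGAL_ADDR),
        "burst": random.choice(sorted(BURST_BINS)),
    }
    assert txn["addr"] in LEGAL_ADDR  # checker: constraint held
    covered.add(txn["burst"])         # sample functional coverage

coverage = len(covered) / len(BURST_BINS)
print(f"burst coverage: {coverage:.0%}")
```

The same three ingredients – constraints bounding the random space, checkers validating every transaction, and coverage telling you when to stop – are exactly what ‘e’, Vera and later SystemVerilog built into the language itself.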
 
Methodologies
 
As adoption of these languages spread, the early adopters started building proprietary methodologies around them. To modularize development, BCLs (Base Class Libraries) were developed by each organization. Maintaining local libraries and continuously improving them while ensuring simulator compatibility was not a sustainable solution. The EDA vendors came forward with methodologies for each of these languages to resolve this issue and standardize usage of the language. Verisity led the show with eRM (e Reuse Methodology), followed by RVM (Reference Verification Methodology) from Synopsys. These methodologies put together a process for moving from block to chip level and across projects in an organized manner, thereby laying the foundation for reuse. Though verification was progressing at a fast pace with these entrants, some inherent issues with these solutions left the industry wanting more. The drawbacks included –
 
- Requirement for an additional tool license beyond the simulator
- Simulator efficiency took a hit because control passed back & forth to this additional tool
- These solutions had limited portability across simulators
- As reusability picked up, finding VIPs based on a given HVL was difficult
- Hardware accelerators started picking up, and these HVLs couldn’t complement them completely
- Ramp-up time for engineers moving across organizations was high
 
SystemVerilog
 
To move to the next level of standardization, Accellera decided to improve on Verilog instead of driving e or Vera as the industry standard. This led to the birth of SystemVerilog, which proved to be a game changer in multiple respects. The primary motivation behind SV was a common language for design & verification that addressed the issues with the other HVLs. The initial thrust came from Synopsys, which declared Vera open source and extended its contribution to defining SystemVerilog for verification. Further, Synopsys, in association with ARM, evolved RVM into VMM (Verification Methodology Manual) based on SystemVerilog, providing a framework for early adopters. With IEEE ratifying SV as a standard (1800) in 2005, the acceptance rate increased further. By this time Cadence, after its quest to promote SystemC as a verification language, had acquired Verisity. eRM was transformed into URM (Universal Reuse Methodology), which supported e, SystemC and SystemVerilog. This was followed by Mentor proposing AVM (Advanced Verification Methodology), supporting SystemVerilog & SystemC. Though SystemVerilog settled the dust by claiming the maximum footprint across organizations, the availability of multiple methodologies introduced inertia against industry-wide reusability. The major issues included –
 
- Learning a new methodology almost every 18 months
- The methodologies had limited portability across simulators
- A verification environment developed using VIPs from one vendor was not easily portable to another
- Teams confused about the roadmaps for these methodologies based on industry adoption
 
Road to UVM
 
To tone down this problem, Mentor and Cadence merged their methodologies into OVM (Open Verification Methodology), while Synopsys continued to stick to VMM. Though the problem was reduced, there was still a need for a common methodology, and Accellera took the initiative to develop one. UVM (Universal Verification Methodology), largely based on OVM and deriving features from VMM, was finally introduced. While IEEE recognized ‘e’ as a standard (1647) in 2011, it was already too late. Functional coverage, assertion coverage and code coverage all joined together to provide the quantitative metrics to answer ‘are we done’, giving rise to CDV (Coverage Driven Verification).
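The CDV "are we done" question combines several coverage metrics into a single closure decision. A minimal Python sketch of that merge, where the metric names, numbers and goal thresholds are all illustrative assumptions:

```python
# Per-metric closure goals: 100% functional coverage, slightly relaxed
# goals for code and assertion coverage (illustrative values).
GOALS = {"code": 0.95, "functional": 1.00, "assertion": 0.90}

def verification_done(results):
    """True only when every coverage metric meets or exceeds its goal."""
    return all(results[m] >= GOALS[m] for m in GOALS)

mid_project = {"code": 0.91, "functional": 0.78, "assertion": 0.85}
closure     = {"code": 0.97, "functional": 1.00, "assertion": 0.93}

print(verification_done(mid_project), verification_done(closure))
```

The design point here is that the metrics are gated individually rather than averaged: a weak functional-coverage number cannot be masked by strong code coverage.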
 
Suggested Reading - Standards War

Saturday, September 15, 2012

Communicating BUGS or BUGgy Communication

A few decades back, when designs had limited gate counts, designers used to verify their code themselves. With design complexity increasing, verification engineers were introduced into ASIC teams. As Moore’s law drove complexity further, IP reuse picked up, spreading engineers all around the globe and making communication across geographies inevitable.
 
The reason for introducing the verification engineer was BUGS. A lot has been written and discussed about verification, but references to bugs are limited. With IPs sourced from different parts of the world and companies having extended teams everywhere, communicating BUGs effectively becomes all the more important. “Wrong assumptions are costly”, and in the semiconductor industry this can throw a team out of business completely. Recently, in a mid-program verification audit for a startup with teams split between the US & India, I realized that a well-defined structure for communicating bugs could improve the turnaround time tremendously. Due to different time zones and working styles, there was a lot of back & forth communication between the team members. Having a well-defined format for communicating bugs helped a lot.
 
BUG COMMUNICATION CYCLE
 
BUG reported → BUG reproduced by designer → BUG fixed → FIX validated → BUG closed → revisit later for review, reproduction when required, or data mining.
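The cycle above is essentially a small state machine, and encoding it as one makes illegal shortcuts (e.g. closing a bug that was never validated) impossible. A minimal Python sketch; the class names, states and the example bug title are invented for illustration:

```python
from enum import Enum, auto

class BugState(Enum):
    REPORTED = auto()
    REPRODUCED = auto()
    FIXED = auto()
    FIX_VALIDATED = auto()
    CLOSED = auto()

# Legal transitions, mirroring the communication cycle above.
# A failed validation sends the bug back for another look.
TRANSITIONS = {
    BugState.REPORTED:      {BugState.REPRODUCED},
    BugState.REPRODUCED:    {BugState.FIXED},
    BugState.FIXED:         {BugState.FIX_VALIDATED, BugState.REPRODUCED},
    BugState.FIX_VALIDATED: {BugState.CLOSED},
    BugState.CLOSED:        set(),   # revisits are reviews, not state changes
}

class Bug:
    def __init__(self, title):
        self.title = title
        self.state = BugState.REPORTED
        self.history = [BugState.REPORTED]   # preserved for later data mining

    def move_to(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

bug = Bug("AHB: wrap burst crosses 1KB boundary")
for s in (BugState.REPRODUCED, BugState.FIXED,
          BugState.FIX_VALIDATED, BugState.CLOSED):
    bug.move_to(s)
print(bug.state.name)
```

Keeping the full `history` around is what later enables the "revisit for review or data mining" step.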
 
APPROACH
 
Before the advent of CRV, the directed verification approach was common. Communicating bugs was simple and required limited information. The introduction of CRV helped find bugs faster, but sharing and preserving information around bugs became complicated. With a well-defined approach, this problem can be moderated. Here is a sample format for what should be taken care of at each stage of the above cycle –
 
BUG reporting
 
Defining mnemonics for blocks, data paths, scenarios etc. that can be appended to the bug title helps. This enables categorizing bugs at any time during project execution.
 
While the tool enforces adding the project-related information, severity, priority and contact details of all concerned to broadcast the information, the detailed description section should include –
 
- Brief description of the issue
- Information on diagnosis based on the debug
- Test case(s) used to discover this bug
- Seed value(s) for which the bug surfaces
- Command line option to simulate the scenario and reproduce the issue
- Testbench Changelist/TAG on which the issue was diagnosed
- RTL Changelist/ TAG on which this issue was diagnosed
- Assertion/Cover point that covers this bug
- Link to available logs & dump that has the failure
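The reporting fields above can be captured as a record type, so a report can be checked for completeness before it is filed. A minimal Python sketch; the class name, field names and example values are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    """Hypothetical record mirroring the reporting checklist above."""
    title: str                   # prefixed with an agreed mnemonic, e.g. "DMA:"
    description: str
    diagnosis: str
    tests: list = field(default_factory=list)    # failing test name(s)
    seeds: list = field(default_factory=list)    # seed(s) reproducing the failure
    command_line: str = ""       # how to rerun and reproduce the issue
    tb_changelist: str = ""      # testbench changelist/TAG at diagnosis
    rtl_changelist: str = ""     # RTL changelist/TAG at diagnosis
    cover_item: str = ""         # assertion/cover point covering this bug
    log_link: str = ""           # logs & dump that show the failure

    def missing_fields(self):
        """Names of fields still empty — a pre-filing completeness check."""
        return [name for name, value in vars(self).items() if not value]

report = BugReport(title="DMA: descriptor fetch hangs on back-to-back abort",
                   description="Fetch FSM stalls after second abort",
                   diagnosis="Suspect missing reset of fetch counter",
                   tests=["dma_abort_stress"], seeds=[314159],
                   command_line="make sim TEST=dma_abort_stress SEED=314159")
print(report.missing_fields())
```

Running `missing_fields()` before filing flags exactly which items of the checklist the reporter still owes.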
 
BUG fixing
 
After the designer has root-caused and fixed the bug, the bug report needs to be updated with –
 
- Root cause analysis and the fix
- Files affected during this fix
- RTL changelist/TAG that has the fix
 
FIX Validation
 
After the BUG moves to the fixed stage, the verification engineer needs to update the TB if required and rerun the test. The test should pass with the required seed value(s) and then with multiple random seeds. The assertion/cover point should be hit multiple times across these runs. With this, the report can be updated further with –
 
- RTL changelist/TAG used to validate the bug
- Testbench changelist/TAG on which the issue was validated
- Pointer to the logs & waveforms of the validated test (if required to be preserved)
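The validation step above reduces to a rerun loop: replay the original failing seed(s), add fresh random seeds, and require the bug's cover point to be exercised in the passing runs. A minimal Python sketch, where `run_test` is a hypothetical stand-in for launching the actual simulation:

```python
import random

def run_test(seed):
    """Hypothetical stand-in for a simulation run.

    Returns (passed, cover_point_hit); after the RTL fix, runs pass
    and the bug's cover point is hit in this sketch.
    """
    return True, True

def validate_fix(original_seeds, extra_runs=5):
    """Rerun original seed(s) plus several random seeds."""
    seeds = list(original_seeds) + [random.randrange(2**31)
                                    for _ in range(extra_runs)]
    hits = 0
    for seed in seeds:
        passed, covered = run_test(seed)
        if not passed:
            return False          # any failure means the fix is not validated
        hits += covered
    return hits > 1               # cover point must be hit multiple times

print(validate_fix([314159]))
```

The two conditions encoded here are exactly the ones in the text: every rerun passes, and the associated assertion/cover point is covered more than once.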
 
BUG Closed
 
After validating the fix, the bug can be moved to the closed state. The verification engineer needs to check whether this test should move to the smoke test list/mini regression and update the list accordingly.
 
Following the above approach really helps to communicate, reproduce and preserve the information around BUGs. The rising intricacy of designs demands a disciplined approach to attain efficiency in the outcome. Communicating bugs is the first step towards it!
 
HAPPY BUG HUNTING!!!

Sunday, April 29, 2012

Verification claims 70% of the chip design schedule!

Human psychology points to the fact that constant repetition of any statement registers it in the sub-conscious mind and we start believing it. The statement “Verification claims 70% of the schedule” has been floating around in articles, keynotes and discussions for almost two decades, so much so that even in the absence of any data validating it, we have believed it as fact for a long time. However, progress in the verification domain indicates that this number might actually be a “FACT”.

Twenty years back, designs were a few thousand gates and the design team verified the RTL themselves. The test benches and tests were all developed in HDLs, and sophisticated verification environments were not even part of the discussion. It was assumed that verification accounted for roughly 50% of the effort.

Since then, design complexity has grown exponentially, and state-of-the-art test benches with plenty of metrics have supplanted legacy verification. Instead of designers, a team of verification engineers is deployed on each project to tackle this cumbersome task. Verification still continues to be an endless job, demanding aggressive adoption of new techniques quite frequently.

A quick glance at the task list of a verification team shows the following items –
- Development of metric driven verification plan based on the specifications.
- Development of HVL+Methodology based constrained random test benches.
- Development of directed test benches for verifying processor integration in SoC.
- Power aware simulations.
- Analog mixed signal simulations.
- Debugging failures and regressing the design.
- Add tests to meet coverage goals (code, functional & assertions).
- Formal verification.
- Emulation/Hardware acceleration to speed up the turnaround time.
- Performance testing and usecases.
- Gate level simulations with different corners.
- Test vector development for post silicon validation.

The above list doesn’t include modeling for virtual platforms, as it is still in the early-adopter stage. Along with the verification team, a significant quantum of cycles is added by the design team towards debugging. If we try to quantify the CPU cycles required for verification on any project, the figures would easily overshadow any other task in the ASIC design cycle.

Excerpts from the Wilson Research study (commissioned by Mentor) indicate interesting (approximated) data –
- The industry adoption of code coverage has increased to 72 percent by 2010.
- The industry adoption of assertions had increased to 72 percent by 2010.
- Functional coverage adoption grew from 40% to 72% from 2007 to 2010.
- Constrained-random simulation techniques grew from 41% in 2007 to 69% in 2010.
- The industry adoption of formal property checking has increased by 53% from 2007 to 2010.
- Adoption of HW assisted acceleration/emulation increased by 75% from 2007 to 2010.
- Mean time a designer spends in verification has increased from an average of 46% in 2007 to 50% in 2010.
- Average verification team size grew by a whopping 58% during this period.
- 52% of chip failures were still due to functional problems.
- 66% of projects continue to be behind schedule. 45% of chips require two silicon passes and 25% require more than two passes.

While the biggies of the EDA industry keep evolving their tools incessantly, a brigade of startups has surfaced, each trying to check this exorbitant problem of verification. The solutions attack the problem from multiple perspectives: some try to shorten the regression cycle, some move tasks from engineers to tools, some provide data mining, while others provide guidance to reduce the overall effort.

The semiconductor industry is continuously defining ways to control the volume of verification, not only by adding new tools or techniques but by redefining the ecosystem and collaborating at various levels. The steep rise in the usage of IPs (e.g. ARM’s market valuation reaching $12 billion, and Semico reporting the third-party IP market grew by close to 22 percent) and VIPs (read posts 1, 2, 3) is a clear indication of this fact.

So much has been added to the arsenal of verification teams, and to their ownership of the ASIC design cycle, that one can safely assume verification effort has moved from 50% in the early 90s to 70% now. And since the process is still ON, it would be interesting to see whether this magic figure of 70% persists or moves up further!!!

Saturday, March 31, 2012

Choosing the right VIP

In the past few months, while interacting with customers, I came across a couple of cases where the VIP played spoilsport. In one case, the IP & VIP were procured from the same vendor during the early phase of the standard protocol’s evolution. One of the key USPs of the product was this IP, and a demonstration of the complete reference design was anticipated at a global computing event. The limitations of the VIP were revealed quite late, during SoC verification. The team struggled hard to manage the bug rate in the IP itself during the SoC verification phase. Finally the product got delayed to an extent where the company (~startup) couldn’t sustain itself and went for an asset sale. The problem wasn’t selecting the IP & VIP from the same vendor but a lack of homework, and maybe a wrong business decision, during the selection process.

In another case, an in-house IP that had seen multiple tapeouts was being validated on a new SoC architecture targeting a different application. The VIP was a third-party one that had been used to verify the IP throughout. After integration testing at SoC level, while developing usecase scenarios, a limitation of the VIP came forward. Since the schedule didn’t allow the liberty to add features to the VIP and validate this scenario, the team went ahead with the tapeout. Unfortunately the application didn’t work on silicon, and root cause analysis revealed a bug in the IP itself. Result: a re-spin was required.

Showstopper bugs, project cancellations, tapeout delays etc. all point to missing the market window, and the post-mortem may point fingers at the VIP!!!

Selecting the right VIP demands a detailed evaluation process along with a comprehensive checklist to compare solutions from different vendors. To find available solutions in the market, www.chipestimate.com and www.design-reuse.com are quite useful. After identifying the set of applicable vendors, it is important to collect relevant information for an initial analysis –

- Detailed documentation, i.e. a reference manual (architecture of the VIP) and a user manual (how to use it). These documents should list all aspects of the VIP functionality, including parameterizable sequences, component configurations, APIs, parameters, coverage, error messages etc.
- Understanding the vendor’s VIP development process, i.e. how the VIP is validated, its maturity process, and how many and what kind of designs it has verified; the release process, bug reporting mechanism, turnaround time for fixing bugs, and commitment and plans to support the evolving standard.
- The vendor’s overall VIP portfolio, languages & methodologies used, membership of & participation in the relevant standards bodies, development/support staff availability and domain expertise, and the level of onsite support possible.
- Standard compliance, the compliance coverage report, items not supported in the current release, the list of assertions, and code & functional coverage reports.
- Support across methodologies, simulators, accelerators, formal verification engines and ESL verification.
- Existing customer references.
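The first-phase comparison above lends itself to a simple weighted scorecard. A minimal Python sketch, where the criteria mirror the checklist and all weights and per-vendor scores are illustrative assumptions:

```python
# Relative importance of each checklist item (illustrative weights).
CRITERIA_WEIGHTS = {
    "documentation": 2,
    "development_process": 3,
    "vendor_portfolio": 1,
    "standard_compliance": 3,
    "tool_support": 2,
    "customer_references": 1,
}

def score(vendor_scores):
    """Weighted sum of 0-5 scores per criterion."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in vendor_scores.items())

# Hypothetical evaluation results for two candidate vendors.
vendor_a = {"documentation": 4, "development_process": 3, "vendor_portfolio": 5,
            "standard_compliance": 4, "tool_support": 3, "customer_references": 4}
vendor_b = {"documentation": 3, "development_process": 5, "vendor_portfolio": 3,
            "standard_compliance": 5, "tool_support": 4, "customer_references": 3}

ranked = sorted([("A", score(vendor_a)), ("B", score(vendor_b))],
                key=lambda kv: kv[1], reverse=True)
print(ranked)
```

A scorecard like this keeps the comparison objective and makes it easy to narrow the field to the one or two vendors worth taking into the hands-on second phase.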

A detailed comparative study of the above data will help narrow down the search to a final one or two vendors. The next phase involves qualitative analysis to validate the data provided by the vendor and the assumptions made by the evaluator during the first phase. Here the engineer(s) need to play around with the VIP, using the examples provided by the vendor, testing it in the target environment, or creating a back-to-back environment with the VIP’s master-slave agents, and try different scenarios to evaluate –

- Ease of use in terms of configuring the components, using the sequences, and portability from block to system level, and how it would work when simulated in parallel with other VIPs in the SoC verification environment.
- Whether it meets the requirements of the metrics planned for determining verification completion, and to what extent the VIP inherently reports them by itself.
- The APIs provided to develop tests for customized functionality of the IP, real-life scenarios, directed tests and stress testing, plus error injection & detection capability, debug and observability.

Interestingly, the experience with the vendor during the evaluation phase indicates the level of support that can be expected later. The key to this two-phase evaluation process is the checklist. VSIA provides comprehensive QIP metrics (final VSI edition) that provide a jump start to this evaluation process.

As they say, “the devil is in the details”: an additional effort at the start, without overlooking the finer points, can prevent serious problems later.

It would be interesting to learn your VIP evaluation experience. Drop a comment now!

Related posts -
Verification IP : Changing landscape - Part 1
Verification IP : Changing landscape - Part 2