Friday, September 9, 2016

Quick chat with Alok Jain : Keynote speaker, DVCon India 2016

Alok Jain
All of us have heard the story of the woodcutter and the importance of the advice "Sharpen your axe". It applies well to everything we do, including verification. Two decades back, the focus of a verification engineer was predominantly on "What to Verify". As complexity grew, "How to Verify" became equally important. To enable this, EDA teams rolled out multiple technologies & methodologies. As we try to assimilate & integrate these flows amidst first-time-silicon and cost pressures, it is important for us to sharpen our axe through continuous learning, applying the right tool for the right job and applying it effectively.

Alok Jain, Senior Group Director in the Advanced Verification Division at Cadence, will be speaking along similar lines as part of his DV track keynote on Day 1 at DVCon India 2016. With 20+ years of industry experience, Alok leads the Advanced Verification Division at Cadence India. Having been associated with different verification technologies over the past two decades, Alok candidly shared his views on the challenges beyond complexity that verification teams need to focus on. Here is a curtain raiser for his talk "Verification of complex SoCs".

Alok, your keynote topic focuses on challenges in verification beyond the complexity resulting from Moore’s law. Tell us more about it.

The keynote is going to focus on the challenges and potential solutions for verification of complex SoCs. Verifying a complex SoC consisting of tens of embedded cores and hundreds of IPs is a major challenge in the industry today. One of the big challenges is performance and capacity. Given the size and complexity of modern SoCs, tests can run for 18-24 hours or even more. One has to figure out how to get the best verification throughput. Another challenge is the generation of test benches and tests. The test benches have to be developed in a way that can achieve good performance in both simulation and hardware acceleration. Tests have to be created that stress the SoC under application use cases, low power scenarios, and multi-core coherency scenarios. The tests have to be re-usable across pre-silicon and post-silicon verification and validation platforms. Yet another challenge is coverage. One has to measure verification coverage across formal, simulation, and acceleration platforms at the SoC level to know when verification is done. The final challenge is how to effectively debug across RTL, test bench, and embedded software on multiple verification platforms.

In the last decade, advancements in verification were focused primarily on unifying HVLs & methodologies. What changes do you foresee in verification flows ‘Beyond UVM’?

UVM is very well suited for IP, sub-system and some specific aspects of SoC verification. However, UVM is not the best approach for general SoC verification. UVM was essentially developed for “bottom-up” verification, where the focus is on trying to exhaustively verify IPs/sub-systems. SoCs require a more “top-down” verification, where the focus is on stressing the SoC under important application use cases. There is a need to reuse SoC content across simulation, emulation, FPGA and post-silicon. UVM is optimized for simulation and is too slow and heavy for high speed platforms. Finally, there is a need to drive software stimulus on CPUs in coordination with hardware interfaces. It is difficult in UVM to drive and control software and hardware interfaces together. All of this pushes us to explore options beyond UVM. The keynote will cover some more insights into options beyond UVM.

The rise of IoT is stretching design demands to far ends, i.e. server-class vs. edge-node devices. How do you see verification flows catering to these demands?

Several of the requirements for IoT verification are similar to those for complex SoCs. But then there are some unique additional requirements from the IoT world. The first is simply the cost of verification. For complex SoCs, the cost of verification has been steadily rising. For IoT applications, one has to consider alternative methods and flows that can reduce the cost. One option is to use some form of a correct-by-construction approach where the design is specifically done in a way that enables a simpler form of verification. Another approach is to put much more emphasis on reuse. This includes horizontal reuse, which is portability across multiple platforms, and vertical reuse, which is reuse from IP to sub-system to SoC. Another requirement is verification throughput for designs with considerably more analog, mixed-signal and low power content. Finally, one has to devise verification techniques and flows that can cater to the security and safety requirements of modern IoT applications.

Formal took a while to become mainstream. The rise of Apps in Formal seems to have accelerated this adoption. What’s your view on this?

Yes, I do agree that apps have considerably accelerated the pace of adoption of formal. Traditionally, formal tools have been developed and used by formal PhDs and experts. The main charter and motivation of these experts was to solve the coolest and hardest problems in formal verification. It was only after some time that both sides (developers and users) started realizing that formal can be used in a much more practical and usable way by engineers to solve specific problems. This led to the development of various formal apps, which greatly enabled the mainstream usage of formal.

This is the 3rd edition of DVCon India. What are your expectations from the conference?

I am expecting to attend keynotes, technical papers and panel discussions that give me an understanding of some of the latest work in the domain of design and verification of IPs, sub-systems and SoCs. In addition, I am looking forward to the opportunity to network with some of my peers from industry and academia.

Thank you Alok!


Come join us in this exciting journey to contribute, collaborate, connect & celebrate @ DVCon India 2016!

Disclaimer: “The postings on this blog are my own and do not necessarily reflect the views of Aricent”

Saturday, September 3, 2016

Quick chat with Sushil : Keynote speaker, DVCon India 2016

Sushil Gupta
A very famous Urdu verse, which translates to “When I started I was alone; slowly others joined and a caravan formed”, truly describes the plethora of challenges in SoC verification that continue to abound as design complexity marches north. It started with growing logic on the silicon and moved to performance before power took over. While we still juggle to handle the PPA implications, time-to-market pressure along with the demand for cost-effective, secure, customized solutions adds further spice to the problem.

Sushil Gupta, Group Director in the Verification Group at Synopsys, covers these problems & potential solutions in his keynote titled “Today’s SoC Verification Challenges: Mobile and Beyond” on Day 2 of DVCon India 2016. Sushil joined Synopsys in 2015 as part of the acquisition of Atrenta. He has 30 years of industry experience spanning various roles in engineering management and leadership in EDA and VLSI design companies. Here is a quick excerpt of the conversation with Sushil around this topic –

Sushil, your keynote topic focuses on challenges in verification associated with the next generation of SoCs. Tell us more about it.

We have seen the chip design industry shift its focus from computers and networking to System on Chips (SoCs) for mobility – smartphones, tablets, and other consumer devices. The next wave of SoCs goes beyond mobility into IoT, automotive, robotics, etc. These SoCs integrate hundreds of functions into a single chip along with a complete software stack including drivers, an operating system, and more. The result is a 10X increase in verification complexity within continually shrinking market windows. My talk focuses on these challenges and how verification solutions must scale to address them effectively.

Reuse of IPs/sub-systems is the key trend with SoCs today. Do you think that reuse from third parties adds to the challenges in verification? If yes, how?

IP/sub-system reuse (both third-party and in-house) helps accelerate the integration of multiple functions into a single chip. However, these IPs/sub-systems can come from multiple sources with heterogeneous design and verification flows. The resulting SoCs are extremely complex, with millions of lines of RTL and testbench, protocols, assertions, clock and power domains, and billions of cycles of OS boot.

Do you think progress in verification methodologies & flows has reached a point where consolidation is key to allowing verification engineers to use the best of each? Any specific trends that you would like to highlight on this?

Integrated verification platforms are key to verification convergence. Verification now extends beyond functional verification into low power verification, debug automation, static and formal verification, early software bring-up, and emerging challenges with safety, security and privacy. This requires not only best-in-class verification tools and engines, but also native integrations between the tools to enable seamless transitions and faster convergence.

Sushil, you have had a significant stint with formal at Atrenta. What are your thoughts on the adoption of formal coming to the mainstream? How does the trend look moving forward?

Formal is fast becoming mainstream because it can catch bugs that are otherwise very difficult to detect. Advancements in the performance, debug and capacity of formal verification tools have enabled formal to become an integral part of a comprehensive SoC verification flow. The emergence of formal ‘Apps’ for clock and reset domains, low power, connectivity, sequential equivalence, coverage exclusions, etc. has enabled a broad range of design and verification engineers to benefit from formal verification without the need to be formal “experts”.

This is the 3rd edition of DVCon India. What are your expectations from the conference?

Speaking from my own experience, having started my career with TI India in 1986, India has very rich design and verification expertise. I hope to learn about the latest challenges and innovations in verification and look forward to working with our customers and partners on new breakthroughs.

Thank you Sushil!

Join us on Day 2 (Sept 16) of DVCon India 2016  at Leela Palace, Bangalore to attend this keynote and other exciting topics.

Disclaimer: “The postings on this blog are my own and do not necessarily reflect the views of Aricent”

Saturday, August 27, 2016

Quick chat with Wally : Keynote speaker, DVCon India 2016

Walden C. Rhines
It takes a village to raise a child! Correlating that with the growth of an engineer: YES, it does require Contribution from many & Collaboration with many. While our respective teams play the role of a family, growth is accelerated when we Connect beyond these boundaries. DVCon India is one such platform enabling all of this for the design, verification & ESL community. The 3rd edition of DVCon India is planned for September 15-16 at Leela Palace, Bangalore.

The opening keynote on Day 1 is from Walden C. Rhines, CEO & Chairman, Mentor Graphics. It is always a pleasure to hear his insights on the semiconductor & EDA industry. This year, he has picked an interesting topic – “Design Verification: Challenging Yesterday, Today and Tomorrow”. While we all wait with excitement to hear him on Sept 15, Wally was kind enough to share his thoughts on some queries that came up after I read the brief about his keynote. Below is an unedited version of the dialogue for you.

Wally, your keynote topic is an excellent start to the program, discussing the challenges head-on. Tell us more about it.

Our industry has done a remarkable job of addressing rising complexity in terms of both design and verification productivity. What’s changed recently in verification is the emergence of a new set of requirements beyond the traditional functional domain. For example, we have added clocking, power, performance, and software requirements on top of the traditional functional requirements, and each of these new requirements must be verified. While the continual development of new standards and methodologies has enabled us to keep pace with rising complexity and stay productive, we are seeing that requirements for security and safety are becoming more important and could ultimately pose challenges more daunting than those we have faced in the past.

In the last few years ESL adoption has improved a lot. Is it the demand to move to a higher abstraction level or the convergence of diverse tool sets into a meaningful flow that is driving it?

Actually, a little of both. Historically, our industry has addressed complexity by raising abstraction when possible. For example, designers now have the option of using C, SystemC, or C++ as a design entry language combined with high-level synthesis to dramatically shorten the design and verification cycle by producing correct-by-construction, error-free, power-optimized RTL.

Moving beyond high-level synthesis, we are seeing new ESL design methodologies emerge that allow engineers to perform design optimizations on today’s advanced designs more quickly, efficiently, and cost-effectively than with traditional RTL methodologies by prototyping, debugging, and analyzing complex systems before the RTL stage.  ESL establishes a predictable, productive design process that leads to first-pass success when designs have become too massive and complex for success at the RTL stage.

The rise of IoT is stretching design demands to far ends, i.e. server-class vs. edge-node devices. How does the EDA community view this problem statement?

Successful development of today’s Internet of Things products involves the convergence of best practices for system design that have evolved over the past 30 years. However, these practices were historically narrowly focused on specific requirements and concerns within a system. Today’s IoT ecosystems combine electronics, software, sensors, and actuators, all interconnected through a hierarchy of various complex levels of networking. At the lowest level, the edge node as you referred to it, advanced power management is fundamental for the IoT solution to succeed, while at the highest level within the ecosystem, performance is equally critical. Obviously, EDA solutions exist today to design and verify each of these concerns within the IoT ecosystem. Yet more productivity can be achieved with more convergence of these solutions where possible. For example, there is a need today to eliminate the development of multiple silos of verification environments that have traditionally existed across various verification engines, such as simulation, emulation, prototyping, and even real silicon used during post-silicon validation. In fact, work has begun within Accellera to develop a Portable Stimulus standard which will allow engineers to specify the verification intent once, in terms of stimulus and checkers, which can then be retargeted through automation to a diverse set of verification engines.
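To make the "specify once, retarget through automation" idea a little more concrete, here is a minimal Python sketch of my own. It is only a toy illustration of the concept, not the Accellera Portable Stimulus language or any vendor tool; the scenario, addresses and data values are all made up. One abstract scenario is captured as data and then emitted both as embedded C for a CPU-driven run and as pseudo testbench calls for simulation.

```python
# Toy illustration of "specify the intent once, retarget through automation".
# NOT the Accellera Portable Stimulus language or any vendor tool; names,
# addresses and data values are made up for the example.
SCENARIO = [
    ("write",      {"addr": 0x1000, "data": 0xA5}),
    ("read_check", {"addr": 0x1000, "expect": 0xA5}),
]

def to_c_test(scenario):
    """Retarget the scenario as embedded C for a CPU-driven (emulation/post-silicon) run."""
    lines = ["int main(void) {"]
    for action, a in scenario:
        if action == "write":
            lines.append(f"  *(volatile int *)0x{a['addr']:X} = 0x{a['data']:X};")
        elif action == "read_check":
            lines.append(f"  if (*(volatile int *)0x{a['addr']:X} != 0x{a['expect']:X}) return 1;")
    lines.append("  return 0;")
    lines.append("}")
    return "\n".join(lines)

def to_sim_calls(scenario):
    """Retarget the same scenario as pseudo testbench calls for a simulation run."""
    return "\n".join(f"{action}({args})" for action, args in scenario)

if __name__ == "__main__":
    print(to_c_test(SCENARIO))     # same intent, software-driven flavour
    print(to_sim_calls(SCENARIO))  # same intent, simulation flavour
```

The single source of truth is the abstract scenario; each backend is just an automated translation of the same intent, which is the spirit of the portability Wally describes.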

Wally you seem to love India a lot! We see frequent references from you about the growing contribution of India to the global semiconductor community. Any specific trends that you would like to highlight?

Perhaps one of the most striking findings from our 2016 Wilson Research Group Functional Verification Study is how India is leading the world in terms of verification maturity. We can measure this phenomenon by looking at India’s adoption of SystemVerilog and UVM compared to the rest of the world, as well as India’s adoption of various advanced functional verification techniques, such as constrained-random simulation, functional coverage, and assertion-based techniques.

This is the 2nd time you will be delivering a keynote at DVCon India. What are your expectations from the conference?

I expect that the 2016 DVCon India will continue its outstanding success as a world-class conference, growing in both attendance and exhibitor participation, while delivering high-quality technical content and enlightening panel discussions.

Thank you Wally! We look forward to seeing you at DVCon India 2016.

Disclaimer: “The postings on this blog are my own and do not necessarily reflect the views of Aricent”

Sunday, June 26, 2016

Marlin & Dory way of 'Finding Bugs'

On our return after watching ‘Finding Dory’, my son asked, “Dad, if you were to find Dory, would you be able to do that?” I said, “Of course!” Next came the HOW? I reminded him that my job is to Find Bugs, so I know the tricks of the game already. That made him super excited and wanting to know more about it. Given that this time the reference was picked by him, I decided to continue with it to explain further.

In the movie, Marlin and Nemo go looking for Dory inside the Marine Life Institute (MLI); likewise, we find bugs inside the design called a System on Chip (SoC). The SoC has a lot of similarity to the MLI in the sense that it is big and complex. As the MLI had different sections, our SoC has different blocks where Dory (Bug) can be found. And it is not only the sections but also the inter-connections that are equally important. When we look for Dory (Bug) inside these blocks we call it IP verification, and when our focus is on the inter-connections we call it Integration or SoC Verification.

Image Source : http://www.socwall.com/desktop-wallpaper/7462/dory-and-marlin-by-ryone/

We start off our quest using the Marlin way i.e. “Assess the situation, evaluate, and plan it out”. We call it the Directed Verification approach wherein we understand the design, prepare a plan on where and how we would look around for Dory (Bug) and then execute accordingly. During this process we also keep asking (reviews) around (designers & peers) to let us know if we are missing out on anything. So if Dory is somewhere around, there is a chance we may sight her. But since Dory doesn’t think much before acting, that makes her unpredictable. There is always a possibility that we may not find her as per our plan.

My son’s eyeballs zoomed…. THEN?

Then we also do what Marlin & Nemo did, i.e. follow “What would Dory do?” My son jumped, "She wouldn’t think twice and would just be random". Yes! We pick the Dory way and we call it Random Verification. We search randomly everywhere in an unplanned sequence and guess what? The chances that we would find Dory (Bug) increase. To make it more effective, we apply weights and constraints to the randomness so as to further improve our luck of finding her. The approach now becomes Constrained Random Verification (CRV). While following this random pattern we also take a note (coverage) of all the places we have visited, to avoid repeating the same place again and to save time. Now we can find her faster. Tracking coverage on top of CRV is called Coverage Driven Verification (CDV). So if we missed finding Dory (Bug) using the Marlin way (Directed Verification) we still have an option to find her the Dory way (CRV).
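For the engineers reading along, here is a tiny Python sketch of the CRV plus CDV idea. It is a toy model of my own, not any simulator's constraint solver; the bin names and weights are made up. Scenarios are picked with weights, and a coverage set records which "rooms" have already been visited so we know when to stop searching.

```python
import random

# Made-up "rooms" of the design we want to visit at least once (coverage bins).
COVERAGE_BINS = {"fifo_full", "fifo_empty", "back_to_back_writes",
                 "read_during_reset", "corner_address"}

# Constraints/weights: likely hiding spots get picked more often than pure random.
WEIGHTS = {"fifo_full": 3, "fifo_empty": 3, "back_to_back_writes": 2,
           "read_during_reset": 1, "corner_address": 1}

def pick_scenario():
    """Constrained-random pick: the Dory way, but with weighted luck."""
    names, weights = zip(*WEIGHTS.items())
    return random.choices(names, weights=weights, k=1)[0]

def run_cdv(max_tests=1000):
    """Coverage-driven loop: note where we have been and stop at coverage closure."""
    visited = set()
    for n in range(1, max_tests + 1):
        visited.add(pick_scenario())   # "take a note (coverage) of where we visited"
        if visited == COVERAGE_BINS:   # every room searched: coverage closure
            return n, visited
    return max_tests, visited

if __name__ == "__main__":
    tests, visited = run_cdv()
    print(f"Coverage closed after {tests} randomized tests: {sorted(visited)}")
```

The weights bias the random search toward the likely hiding spots, and the coverage set is what turns plain CRV into CDV: it tells us when the hunt is done.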

That settled my son for a while, till he pointed out again, “Dad, maybe you should seek help from Bailey, the beluga whale, who can find Dory faster than anyone using echolocation”. I smirked and told him that we have our Bailey too, and we call it Formal Verification. But then, Bailey was dependent on the whale voice between Dory & Destiny, the whale shark, without which he couldn’t be of much help. Similarly, in Formal we are dependent on the assertions that connect the tool to the bug in the design. The effectiveness of this approach is purely dependent on the quality of the voice (assertions) and the connect (covering all parts of the design) between Dory & Destiny. But yes, if that is in place, it is really fast & effective.
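And for Bailey, here is a toy Python sketch, a brute-force reachability check rather than a real formal engine, showing the point about assertions: the tool exhaustively explores every reachable state of a small made-up design model, but it can only flag what the assertion (the whale voice) describes.

```python
from collections import deque

# Made-up "design": a 3-bit counter with a sticky overflow flag (the planted bug).
def next_states(state):
    count, overflow = state
    return [((count + 1) % 8, overflow or count == 7)]   # overflow never clears: the bug

# The "whale voice": an assertion describing what must always hold.
def assertion_holds(state):
    count, overflow = state
    return not (overflow and count < 4)   # overflow must not linger while the count is low

def exhaustive_check(initial=(0, False)):
    """Brute-force reachability: visit every reachable state and test the assertion."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not assertion_holds(state):
            return f"Assertion violated in reachable state {state}"
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return f"Assertion holds over all {len(seen)} reachable states"

print(exhaustive_check())
```

Unlike simulation, the search is exhaustive over the model, but if the assertion had not mentioned the overflow flag at all, the same exhaustive search would have happily reported a pass: the quality of the voice decides what Bailey can hear.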

Now convinced that his dad would be able to find Dory, my son asked, “So once you have found Dory, what do you do next”?

I laughed and told him that we don’t have to find only one Dory (Bug). There are many of them, and the address and architecture of the institute (new SoC) also keep changing. So we just keep Finding Dory (Bug)!!!

Disclaimer: "The postings on this blog are my own and do not necessarily reflect the views of Aricent"

Sunday, June 12, 2016

Learning Verification with Angry Birds

What do you say when your little one asks you, “Dad! What do you do?” Well, I said, “I am an engineer”. For his age, he knew what a driver, a doctor and a policeman do. So the next question was, “What does an engineer do?” I pointed him to different man-made stuff around us to explain what all an engineer does. As an inquisitive kid he wanted to know if I built them all. That is when I tried to explain how different engineering functions build different artifacts. So the question came back: “What do you do?” Finally, I told him, “I find BUGS in designs”. The next one was HOW? Given that he had watched The Angry Birds movie recently & loves to play the game, I picked from there to explain what a verification engineer really does.

Figure 1 : Labeled screenshot of The Angry Birds game 
As in the figure above, the screenshot of the game is called the TESTBENCH for us. The target seen on the right is called the DESIGN UNDER TEST, or DUT in short. Our goal is to hammer the DUT with minimum iterations such that all the BUGS inside it, like the pigs above, get kicked out. On the left you see a series of angry birds waiting to take the leap. We refer to them as the PACKETS or SEQUENCE ITEMS. They are all from the same base class “angry_birds”, i.e. they have certain characteristics in common while each has some different features, so as to ensure we hit the DUT differently. We sequence these birds (sequence items) in such a way as to generate different scenarios to weed out the pigs (bugs). This scenario is called a TEST CASE. The catapult shown is known as the DRIVER in our testbench. It takes the angry bird (sequence item) and throws (drives) it onto the DUT at different points known as the INTERFACES of the DUT. Once the angry bird (sequence item) hits the DUT, there is an inbuilt MONITOR in the game (testbench) that confirms whether the flight taken was useful or not and, if it was, by how much. If the hit resulted in the correct outcome, the SCOREBOARD gives a go-ahead, and this leads to the scores that we get, which we call COVERAGE. The high score is the maximum coverage achieved with this test case. When we are able to kill all the pigs (here, bugs) hidden in different parts of the DUT, we are all set to move to another screen, i.e. a new test case targeting another part of the DUT. Once all tests at a given level pass, we move to the next level, which is a little tougher. We can call it moving vertically, i.e. block to subsystem to SoC/Top, OR moving horizontally within a given scope, i.e. more complex test scenarios or stress tests. Usually, by the time we have passed all levels, another version of the game is released and we move on to that one, i.e. the next PROJECT.
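For the grown-ups, here is a toy Python sketch of the same mapping. It is illustrative only, not UVM or any real testbench library; every class, name and number is made up. A base sequence item, a driver (catapult), a DUT hiding "pigs", and a scoreboard that doubles as coverage are enough to show how the pieces fit together.

```python
import random

class AngryBird:                      # base SEQUENCE ITEM: shared traits, randomized
    def __init__(self):
        self.speed = random.randint(1, 10)

class BombBird(AngryBird):            # each subclass adds its own twist
    def __init__(self):
        super().__init__()
        self.blast_radius = 2

class SpeedBird(AngryBird):
    def __init__(self):
        super().__init__()
        self.speed += 3               # faster bird, hits harder

class Structure:                      # DUT: hides "pigs" (bugs) behind its blocks
    def __init__(self):
        self.pigs = {"pig_in_tower", "pig_in_cellar"}
    def hit(self, bird):
        # crude model: a fast or explosive hit knocks one hidden pig out
        if (bird.speed > 7 or getattr(bird, "blast_radius", 0) > 1) and self.pigs:
            return self.pigs.pop()
        return None

class Catapult:                       # DRIVER: launches items onto the DUT interface
    def __init__(self, dut):
        self.dut = dut
    def launch(self, bird):
        return self.dut.hit(bird)     # a real bench would also have a MONITOR watching here

class Scoreboard:                     # SCOREBOARD + COVERAGE: checks outcomes, keeps score
    def __init__(self):
        self.coverage = set()
    def check(self, outcome):
        if outcome:
            self.coverage.add(outcome)

def test_case(num_shots=50):          # TEST CASE: one sequence of launches
    dut, scoreboard = Structure(), Scoreboard()
    driver = Catapult(dut)
    for _ in range(num_shots):
        bird = random.choice([BombBird, SpeedBird])()   # randomized sequence items
        scoreboard.check(driver.launch(bird))
    print(f"Pigs (bugs) removed: {sorted(scoreboard.coverage)}; remaining: {sorted(dut.pigs)}")

test_case()
```

The separation is the real lesson: the stimulus (birds), the launcher (driver), the design (structure) and the checker (scoreboard) are independent pieces, which is exactly why a new test case or a new DUT block can be swapped in without rebuilding the whole game.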

After explaining it to my son, I felt he would be fascinated with my work. He thought about it and said, “Dad, so you don’t really work, you go to office and play”!!!

All I could tell him was, “Become a Verification Engineer and you can play too at work”!!!

Disclaimer: "The postings on this blog are my own and do not necessarily reflect the views of Aricent"

Monday, May 23, 2016

.....of Errors & Mistakes in Verification

Miniaturization of devices has led to packing more functionality onto a given slice of silicon. An after-effect of that is heating of the device due to increased power consumption, and the need to discover innovative ways of cooling these components. As electronics went wireless, the concern about power came to the forefront, because who wants to recharge the battery every second hour? Different techniques have been adopted since then to address this growing concern. One such technique is letting parts of the silicon go into hibernation and triggering a wake-up when needed. My hibernation from blogging was no different, except that though I received many pokes during this time, the trigger probably wasn’t effective enough to tantalize the antennas of the blogger in me. It was only during a recent verification event hosted by Mentor Graphics that my friend Ruchir Dixit, Technical Director – India at Mentor Graphics, introduced the event with an interesting thought touching on the basics of verification. The message completely resonates with the idea of this blog of exploring verification randomly but rooted in the basics, and I took it as a sign to get the ball rolling again. To start with, I am sharing the thoughts that actuated this restart. Thank you Ruchir for allowing me to share the same.


Source: Slides from Ruchir Dixit - 'Verification Focus & Vision' presented at Verification Forum, Mentor Graphics, India

Before we unfold the topic further, have you ever thought about why computers only spell out ERRORS & not MISTAKES?

Let’s start with understanding the basic difference between an error & a mistake. A mistake is usually a choice that turns out to be wrong because the outcome is wrong. Mistakes are made when a free choice is made, either accidentally or based on performance, but they can be prevented or corrected. An error, on the other hand, is a violation of a golden reference or set of rules that would have led to a different action and outcome. Errors typically are a result of lack of knowledge, not choice. That is the reason a computer doesn’t make mistakes and only throws errors on the screen when it is unable to move forward on a pre-defined set of actions or sees a violation of them. And that is again the reason why you see Warnings & Errors from our EDA tools and not Mistakes :) Machines don’t make mistakes… we do!

Now talking about verification, the sole reason why we verify is BUGS! And the source of these BUGS is the ERRORS & MISTAKES committed as part of code development.

Mistakes, as we understood earlier, are the result of a free choice. While no one wants to make a bad choice, mistakes still creep into the code due to distractions or coding in a hurry. Preventing or correcting such mistakes is a matter of basic discipline, and that is where the EDA tools come to the rescue in assisting you to make the right choice.
 
Errors typically happen due to ignorance about the subject or partial knowledge leading to wrong assumptions. This could further find its roots in incomplete documentation or incorrect understanding of the subject. Given that documentation & the resulting conclusions are rather subjective, it is hard to define the right way to document anything. The only way to minimize errors is to prevent them from occurring by defining a clear set of rules that need to be followed, and that is where ‘Methodology’ comes into the picture. A classic example is having a template generator for UVM code to ensure the code is correct by construction & integrates seamlessly at different levels. Having coding guidelines is another way to reduce errors. Uncovering the rest of the errors is where the tests become important; unless we stimulate that scenario we may not know what & where the error is.
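As a small illustration of the template-generator idea, here is a minimal Python sketch of my own, not any production generator. It emits a deliberately abbreviated UVM-style skeleton (constructors, ports and phases are left as comments) for whatever block name you pass in, so the boilerplate comes out consistent by construction; the block name "spi" is just an example.

```python
# Minimal sketch of the "template generator" idea: emit a consistent, abbreviated
# skeleton for a verification component so the boilerplate is the same everywhere
# by construction. Constructors/ports are intentionally omitted in this toy example.
AGENT_TEMPLATE = """class {name}_item extends uvm_sequence_item;
  `uvm_object_utils({name}_item)
  // fields, constraints, constructor ...
endclass

class {name}_driver extends uvm_driver #({name}_item);
  `uvm_component_utils({name}_driver)
  // constructor, run_phase ...
endclass

class {name}_agent extends uvm_agent;
  `uvm_component_utils({name}_agent)
  {name}_driver m_driver;
  // constructor, build_phase ...
endclass
"""

def generate_agent_skeleton(name: str) -> str:
    """Return a consistent skeleton for block `name`, correct by construction."""
    return AGENT_TEMPLATE.format(name=name)

print(generate_agent_skeleton("spi"))
```

The point is not the template itself but that the naming, registration macros and structure are decided once, so an error born of partial knowledge has far fewer places to hide.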

So while errors & mistakes are unavoidable, it is the deployment of the right set of methodologies and tools that leads to bug-free silicon …. In time…. Every time!

After writing this post, I was tempted to say that ‘To ERR is HUMAN and to FORGIVE or VERIFY is DIVINE!’

But then that would be a MISTAKE again :)

Happy Bug Hunting!!!


Disclaimer: “The postings on this blog are my own and do not necessarily reflect the views of Aricent”

Sunday, October 18, 2015

The magical chariot in verification!

As the holiday season kicked off in India, the antennas of my brain tickled to intercept what is going on around me and relate it to verification. In the past, I have made an attempt to correlate off-topic subjects with verification around this time every year. Dropping the mainstream sometimes helps, as it gives you a different perspective and a reason to think beyond normal, to think out of the box, to see solutions in a different context and apply them to yours, in this case verification! The problem statement being – driving verification closure with growing complexity and shrinking schedules.
 
Before I move forward, let me share the context of these holidays. This is the time in India when festivities are at their peak for a few weeks. Celebration is in the air, and the diversity of the culture makes it even more fascinating. People from all over India celebrate this season and relate it to various mythological stories while worshipping different deities. The common theme across them is that in the war of good and evil, good finally prevails! What is interesting, though, are the different stories associated with each culture detailing these wars between good and evil. As the evil grows and the good evolves to fight it, both tend to acquire different weapons to attack as well as defend. And when the arsenal at both ends is equally matched, the launch-pad becomes a critical factor in arriving at a decision. Possibly, that is another reason why different deities ride different animals and some of these stories talk about those magical chariots that made all the difference to the war.
 
So how does this relate to verification?
 
As verification engineers, our quest for bugs amidst growing complexity has made us acquire different skills. We started off with directed verification using HDLs/C/scripts and soon moved to constrained random verification. Next we picked up different coverage metrics, i.e. functional coverage, code coverage and assertions. As we marched further, we added formal apps to take care of the housekeeping items that every project needs. A new tool or flow gets added almost every couple of years, in line with Moore’s law :). And now if we look back, the definition of verification as a non-overlapping concern (functional only) in the ASIC design cycle of a few decades ago is all set to cross paths with the then perceived orthogonal concerns (clock, power, security and software). While we continue to add a new flow, tool or methodology for each of these challenges that are rocking the verification boat, what hasn’t changed much in all these years is the platform that verification teams continue to use. Yes, new tools and techniques are required, but are these additions bringing the next leap that is needed or are they just coping with the task at hand? Is it time to think different? Time to think beyond normal? Time to think out of the box? And if YES, what could be a potential direction?
 
This is where I come back to the mythological stories wherein, when the arsenal wasn’t enough, it was the magical chariot that did the trick! Yes, maybe the answer lies in bringing the change in the platform – our SIMULATORS – the workhorse of verification! Interestingly, the answers do not need to be invented. There are alternate solutions available in the form of VIRTUAL PROTOTYPING or HARDWARE ACCELERATORS/EMULATORS for RTL simulations. Adopting these solutions would give an edge over both the bugs causing the menace and the competition! And for those who think it is costly to adopt, a lost market window for the product could be even costlier!!!
 
As Harry Foster mentioned in his keynote at DVCon India 2015 – It’s about time to bring in the paradigm shift from “Increasing cycles of verification TO maximising verification per cycles”. He also quoted Henry Ford, the legend who founded Ford Motor Company and revolutionized transportation and American industry.
 
 
On that note, I wish you all a Happy Dussehra!!! Happy Holidays!!!


If you liked this post you would love -
Mythology & Verification
HANUMAN of verification team!
Trishool for verification
Verification and firecrackers

Friday, September 25, 2015

DVCON India turned 2!!!

Nothing beats nurturing a seed, an infant, an idea or an EVENT; watching it grow and clocking the milestones of its achievements. For many of us, the bright sunny day of Sept 10 brought back the same feeling. Yes, this month DVCon India turned 2!
 
Sponsored by Accellera, the conference expanded to India last year. It is an excellent platform for the design and verification community working at the IP, SoC and system level to discuss problems and alternate solutions and to contribute to standards. The ecosystem today has multiple EDA-driven forums showcasing the right and optimal usage of the respective vendors’ tools. DVCon, being vendor neutral, focuses on the need for standards in languages and methodologies to overcome the challenges introduced by rising complexity, while emphasizing the right way of applying these standards.
 
History of DVCon
The history of DVCon can be traced back to the late 80s, when VHDL users met twice a year under the name VUG. By the early 90s, it had become an annual event called VIUF. Around the same time, Verilog users also gathered annually for IVC. In the late 90s, these two events joined hands to form HDLCon. In 2003, it was re-branded as DVCon. Based on these facts, DVCon US has actually been serving the community for 25+ years. In 2014, it expanded globally to India & Europe.
 


The 2-day conference was held at Leela Palace, Bangalore on Sept 10-11, 2015. Riding on the success of DVCon India 2014, this year was planned to be bigger and better!!! A change in venue, a modified program for higher interaction among the participants, the addition of a gala dinner and the higher quality of the content were the key highlights of this year’s event. The program was put together keeping in mind the 4Cs: Contribute, Collaborate, Connect & Celebrate - a clear reflection of the spirit of DVCon.
 
DAY 1
 
A packed hall with ~600 participants witnessing the lamp lighting ceremony was a clear indication of the enthusiasm that was set to unfold. Yours truly opened Day 1, introducing the program and underlining the message that DVCon is all about active participation! Harry Foster from Mentor Graphics delivered the opening keynote 'From Growing Complexity to Faster Horses', citing interesting facts about the trends in design and verification. Vinay Shenoy shared an excellent insight as part of the invited keynote 'Perspective on Electronics Ecosystem in India', covering the history and initiatives under the ‘Make in India’ campaign. The rest of the day kept everyone busy with invited talks from subject matter experts, panels on upcoming technologies and tutorials around standards. The exhibitors kept the crowd involved all throughout, sharing potential solutions to the challenges faced. Having been drenched in a rich rain of technical content throughout the day, it was time for some fun in the evening. The crowd came together celebrating 10 years of SystemVerilog as a standard and the IEEE standardization of UVM. Amidst applause, pranks, music, dance and illusions, Day 1 concluded with tweets and chirping over dinner & drinks.
 
DAY 2
 
An extended Day 1 didn’t stop the participants from changing gears back to technical mode on Day 2, with Ajeetha Kumari opening the day, followed by Dennis Brophy sharing an overview of Accellera. Manoj Gandhi from Synopsys delivered the opening keynote 'Propelling the Next Generation of Verification Innovation', discussing how design and verification challenges have progressed and the need of the hour. Atul Bhatiya took the stage next as an invited keynote speaker, talking about 'Opportunities in Semiconductor Design in India' and encouraging the audience to envision and jump to where the ball would be rather than running after it. The rest of the day hosted different tracks of papers and posters shortlisted by the Technical Program Committee. By the evening, overwhelmed with the discussions, solutions and networking opportunities, the junta assembled again to appreciate the efforts put in by members of the DVCon India committees and congratulate the winners of the Best Paper & Poster awards!
 
As Day 2 concluded, the team that had put in stretched hours for almost a year was overjoyed with the grand success of the event. Those relentless efforts paid off well in taking the conference to the next level. Yes! The nurturing all these days, witnessing the growth and marking the achievement of DVCon India 2015 was all worth it!
 
Other posts on DVCon India -
 

Tuesday, August 11, 2015

101 with Richard Goering : The technical blogging guru

Richard Goering - retired EDA editor
The digital world has connected people across geographies without their ever meeting or talking in person. It is interesting to see the cross-pollination of ideas, thoughts and mentoring that travels across boundaries flying on the wings of this connected world. The bond developed when connected on these platforms is no less than a real one. I happen to have such a bond with Richard Goering, having been a religious follower of his technical articles for more than a decade. So when Richard announced his retirement, I requested an interview with him to be published on this blog. Humble as he has been all these years, he accepted the request, and what follows is a short interview with the blogging guru whom I admire a lot for his succinct yet comprehensive posts all these years.
 
Q: Richard, please share a brief introduction to your career?
 
I have always been a writer. I graduated from U.C. Berkeley with a degree in journalism in 1973. In 1974, living in what was to become Silicon Valley, I worked for a long-dead publication called Northern California Electronics News. I wrote an article that described electron beam lithography as the “next big thing” in semiconductor manufacturing. Today this technology is still emerging.
 
In the early 1980s I was a technical writer in Kansas City, Missouri for a company that made computer-controlled bare board testers. I took classes at the University of Missouri in Fortran, Pascal, and assembly language. I still remember going to the campus computer centre with a stack of punched cards, hoping that one error wouldn’t keep the whole program from compiling.
 
In 1984 I joined the staff of Computer Design magazine, and wrote several articles about test. Shortly afterwards I was asked to go cover a new area called “CAE” (computer-aided engineering). This was, of course, the discipline that became “EDA” and I have written about it ever since. I was the EDA editor for Computer Design (4 years) and then for EE Times (17 years). I worked for Cadence, primarily as a blogger, for the past 6 years.
 
Q: When did you realize it’s time to start blogging and why?
 
I actually had a blog during my final years at EE Times, which ended in 2007. At Cadence I wrote the Industry Insights blog. Today there are few traditional publications left, especially in print, and it appears that blogs are a primary source of information for design and verification engineers.
 
Q: What are the three key disruptive technologies you observed that had a high impact on the semiconductor industry?
 
From an EDA perspective, the most significant change was the move from gate-level schematics to RTL design with VHDL or Verilog. This move provided a huge leap in productivity. It also allowed verification engineers to work at a higher level of abstraction. Looking more closely at verification, there was a shift from directed testing to constrained-random test generation. This came along with coverage metrics, executable verification plans, and languages such as “e” from Verisity. I think a third disruptive technology is emerging just now – it’s the importance of software in SoC design, and the need for software-driven verification.
 
Q: When did you start hearing the need for a verification engineer in the ASIC design cycle?
 
I think this goes back many years. Most chip design companies have separate verification teams. Nowadays there’s a need for design and verification engineers to work more closely together, and for designers to do some top-level verification, often using formal or static techniques.
 
Q: Please share your experiences with the evolution of verification?
 
At EE Times, I wrote about many new verification companies and covered key product announcements. At Cadence I was more focused on Cadence products, but I continued to cover DVCon and other verification related industry events. 
 
Q: Do you believe that today verification accounts for 70% of the ASIC design cycle efforts?
 
I think we must be very careful with statements such as these. The question is, 70% of what? Are we looking at the entire ASIC/SoC design cycle, from software development through physical design? Or are we considering just “front end” hardware design? Are we talking about block-level verification or looking at the whole SoC and the integration between IP blocks? The 70% claim is about marketing, not engineering.
 
Q: What are the key technologies to look forward for in near future?
 
I think you’re going to see software-driven verification methodologies that employ “use case” testing. The idea here is to specify system-level verification scenarios that involve use cases, and to automatically generate portable, constrained-random tests. The tests are “software driven” because they can be applied through C code running on embedded processor models. Another emerging concept is the “formal app.” A formal app is an automated program that handles a specific task, such as X state propagation. Today most providers of formal verification offer formal apps.
 
Q: What is it that you would miss about our industry the most?
 
EDA is a dynamic industry. There is always something new and exciting. I will miss the constant innovation and the spirit that drives it.
 
Q: Words of wisdom to the readers?
 
Don’t be afraid to try something new. Increasing chip and system complexity will drive the need for more productive design and verification methodologies. Job descriptions will change as software, hardware, analog, digital, and verification engineers all need to work more closely together.
 
Thank you Richard for your answers.
 
Your writings have helped in spreading the technology and inspired many of us to do it ourselves too. Wish you happiness and good health!!!

Sunday, April 19, 2015

Moore's law - A journey of 50 years

50 years of innovation! 50 years of a quest with complexity! 50 years of Moore's law! Yes, April 19th is an important date for the semiconductor industry. It was on this date in 1965 that a paper was published citing Gordon Moore's observation - that the number of transistors on a given silicon area would double almost every two years. The observation turned into a benchmark and later a self-fulfilling prophecy chanted by everyone, whether an aspirant wanting to be a part of this industry or a veteran who has worked all throughout since the time when the law was still an observation! I myself remember my first interview as a fresh grad, where I was asked the definition & implications of this law. It would not be a surprise if a survey were done on the one name that people in this industry have read, heard or uttered the most in their careers, and the result came back unanimously: MOORE!
The infographic below from Intel would help you appreciate the complexity that we are talking about –


In this pursuit to double the number of transistors, the industry experienced some major shifts. Let's have a look at the notable ones that had a major impact -
- Birth of the EDA industry – As the numbers grew, it became difficult to handle the design process manually and there was a need to automate the pieces. While the initial work along these lines happened in the design houses, it was soon realized that re-inventing the wheel and maintaining proprietary flows without considerable differentiation to the end products wasn't very wise. This led to the birth of the design automation industry, which today happens to be the lifeline of the SoC design cycle.
- Birth of the fabless ecosystem – The initial design houses had the muscle to manufacture the end product while allowing some contract manufacturing for the smaller players. This setup had its own set of issues discouraging startups. Also, maintaining the existing node while investing in R&D for next-gen nodes was unsustainable. It was only in the late 80s, when Morris Chang introduced the foundry model, that the industry realized fabless was a possibility. Since then, all stakeholders of the ecosystem have collaborated towards realizing Moore’s law.
- Reuse – As the transistor counts scaled, the turnaround time to design should have increased, but, to keep a check on it, reusability was adopted. This reuse was introduced at multiple levels. Different consortiums came forward to standardize design representations & hand-offs. Standards helped in promoting reuse across the industry. Next was design reuse in the form of IPs. For standard protocols the IPs are reused across companies, while for proprietary ones reuse within the organization is highly encouraged. Reuse has played a significant role in sustaining the pace that Moore’s law suggests.
- Abstraction – When the observation was made, designs were still at the transistor level and layouts were done manually. Due to the need to sustain the rising complexity, the industry moved to the next level of abstraction, i.e. logic gates, followed by the Register Transfer Level, where the design is represented in HDLs and synthesized to gates. Today the industry is already talking about still higher-level synthesizable languages.
- Specialization – The initial designs didn’t require the variety of skill sets needed today. Given the evolution of the design cycle and the quantum of responsibility at every stage, there was a need to bring in specialists in each area. This led to RTL designers, verification engineers, gate-level implementation engineers and layout engineers. Today the overall team realizing a complex SoC runs into hundreds of engineers with varied skill sets, involving EDA, foundry, reuse & abstraction.
Throughout these 50 years, there were many a time when experts challenged the sustainability of Moore’s law. Most of them had a scientific rationale endorsing their argument. However, the collective effort of the industry was always able to find an answer to those challenges – sometimes through science, sometimes through logic and sometimes through sheer conviction!
Long live Moore’s law!

Sunday, March 22, 2015

Is Shift Left just a marketing gimmick?

This year DVCon in the US was a huge success, hosting 1200+ visitors busy connecting, sharing & learning! With the UVM adoption rate stabilizing, this year the talk of the event was ‘Shift Left’ – a discussion kicked off in a keynote by Aart J. De Geus, CEO of Synopsys. The reason for the interest generated is that there are gurus preaching it to be the next big thing, and then there are pundits predicting it to be a mere marketing buzzword. In reality, both are correct!

The term 'Shift Left' is relatively new and is interesting enough to create a buzz around the industry. Without the buzz there is no awareness, and without awareness, no adoption! However, the phenomenon itself, i.e. squeezing the development cycle aka 'Shift Left' for faster time to market, has been around for more than a decade.

In the 90s, hundreds of team members worked relentlessly for years to tape out one chip & were flown to destinations like Hawaii to celebrate it. Today this is no longer heard of, because every organization, or for that matter even the captive centres themselves, are taping out multiple chips per year. The celebration got squeezed to a lunch/dinner - probably indicating a 'Shift Left' in celebrations too :)

Back in the 90s, the product was HW centric and the so-called ASIC design cycle was fairly simple owing to its sequential nature, where the next stage starts once the earlier one is done. The industry saw this as an opportunity and started working towards tools & flows that could help bring in efficiency by parallelizing the efforts. The introduction of constrained random verification allowed verification efforts to run in parallel with RTL development, thereby stepping left. Early RTL releases to the implementation team helped parallelize the efforts towards floor planning, placement, die size estimation, package design, etc. Reuse of IPs, VIPs, flows, methodologies, etc. gave a further push, enabling an optimized design cycle. These efforts brought in the first level of the now so-called 'Shift Left' in the design cycle.

In the latter part of the last decade, 2 observations were evident to the industry -
1. The product is no longer HW alone and is instead a conglomeration of HW & SW, with the latter adding further delays to the overall product development cycle.
2. The efficiency achieved out of parallelism is limited by the longest pole among the divided tasks. In the ASIC design cycle, verification happens to be the factor gating any further squeeze in the cycle.

This became the next focus area, and today, given that the solutions have reached some level of maturity, the buzzword that we call ‘Shift Left’ has finally found an identity! The key ideas that enable this shift left include –

- Formal APPS enabling faster, targeted verification of defined facets in any design. The static nature of the solution, wrapped up in the form of APPS, has tickled the interest of the design community to contribute to verification productivity by cleaning up the design before mainstream verification starts. This leads to another buzzword, DFV - ‘Design for Verification’.

- FPGA prototyping has always been there, but each organization was spending time & effort to define & develop its own prototyping board. Today, off-the-shelf solutions give the desired jump start to the prototyping process, enabling early SW development once the RTL is mature.

- To improve the speed of verification, hardware accelerators aka emulation platforms were introduced, and these solutions opened up the gates for early software development even before the RTL freeze milestone.

- The improvement in speed with a higher level of abstraction was evident when the industry moved from gate level to RTL. The next move was planned with transaction-level modelling. While high-level synthesis is yet to witness mass adoption, its extension resulted in virtual prototyping platforms enabling architecture exploration, HW/SW partitioning and early SW development even before RTL design/integration starts.

In summary, the product development cycle is getting refined by the day. The industry is busy weeding out inefficiencies in the flow, automating everything possible to improve predictability and bringing in the required collaboration across the stakeholders for realizing better, faster & cheaper products. Yes, some call it the great SHIFT LEFT!