Sunday, February 22, 2015

DVCON : Enabling the O's of OODA loop in DV

Today the traditional methods of verifying a design are struggling with complexity, demanding new leaps and bounds. In our last post we discussed why a new pair of eyes was added to the design flow, and since then we have continued not only to increase the number of those pairs but also to improve the lenses these eyes use to weed out bugs. These lenses are nothing but the flows and methodologies we have been introducing as the design challenges continue to unfold. Today, we have reached a point in verification where ‘one size doesn’t fit all’. While the nature of the design commands a customized process of verifying it, even for a given design, moving from block to sub-system (UVM centric) and on to SoC/top level (directed tests), we need to change the way we verify the scope. Besides the level, certain categories of functions are best suited to a certain way of verification (read formal). Beyond this, modelling the design and putting a reusable verification environment around it to accelerate development is another area that requires attention. With analog sitting next to digital on the same die, verifying the two together demands a unique approach. All in all, for the product to hit the market window at the right time you cannot just verify the design; you need a well-defined strategy to verify it in the fastest and best possible fashion.

So what is the OODA loop? From Wikipedia, the phrase OODA loop refers to the decision cycle of observe, orient, decide, and act, developed by military strategist and USAF Colonel John Boyd. According to Boyd, decision making occurs in a recurring cycle of observe-orient-decide-act. An entity (whether an individual or an organization) that can process this cycle quickly, observing and reacting to unfolding events more rapidly than an opponent, can thereby "get inside" the opponent's decision cycle and gain the advantage. The primary application of the OODA loop was at the strategic level in military operations. Since the concept is core to defining the right strategy, its application base continues to grow.

To some extent this OODA loop entered the DV cycle with the introduction of Constrained Random Verification paired with Coverage Driven verification closure. Constrained random regressions kicked off the process of observing the gaps, analyzing the holes, deciding whether they need to be covered, and acting by refining the constraints further so as to direct the simulations to cover those holes.
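As a minimal SystemVerilog sketch of this loop (the pkt_item class, its fields and the coverage bins are all hypothetical), a covergroup is what lets us observe and orient on the holes, and a constraint layered on top of the original class is the act that steers the next regression towards them:

```systemverilog
// Hypothetical sketch of the observe-orient-decide-act loop in CRV.
class pkt_item;
  rand bit [7:0] len;
  rand bit [1:0] kind;           // 0:data 1:ctrl 2:mgmt 3:reserved

  constraint c_legal { kind inside {[0:2]}; len > 0; }

  // OBSERVE/ORIENT: functional coverage exposes which bins stay empty.
  covergroup pkt_cg;
    cp_kind : coverpoint kind { bins data = {0}; bins ctrl = {1}; bins mgmt = {2}; }
    cp_len  : coverpoint len  { bins small = {[1:15]}; bins big = {[240:255]}; }
    cross cp_kind, cp_len;
  endgroup

  function new(); pkt_cg = new(); endfunction
endclass

// DECIDE/ACT: suppose regression analysis showed the (mgmt, big) cross hole
// was never hit; a derived class tightens the constraints to target it.
class pkt_item_mgmt_big extends pkt_item;
  constraint c_target { kind == 2; len inside {[240:255]}; }
endclass

module tb;
  initial begin
    pkt_item_mgmt_big p = new();
    repeat (100) begin
      void'(p.randomize());
      p.pkt_cg.sample();
    end
    $display("cross coverage = %0.1f%%", p.pkt_cg.get_coverage());
  end
endmodule
```

The loop then repeats: the next coverage report is observed, the remaining holes are oriented upon, and the constraints are refined again.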

Today, the need for applying OODA loop is at a much higher level i.e. to strategize on the right mix of tools, flows and methodologies in realizing a winning edge. The outcome depends highly on the 2 O’s i.e. Observe & Orient. In order to maximize returns on these Os, one must be AWARE of –
1. The current process that is followed
2. Pain points in the current process and anticipated ones for the next design
3. Different means to address the pain points

Even to address the first 2 points mentioned above, one needs to be aware of what to observe in the process and how to measure the pain points. While the EDA partners try to help out in the best possible way, enabling the teams with the right mix, it is important to understand what kind of challenges are keeping others in the industry busy and how they are solving these problems. One of the premier forums addressing this aspect is DVCON! Last year, DVCON extended its presence from the US to India and Europe. These events provide a unique opportunity to get involved and, in the process, connect, share and learn. Re-iterating the words of Benjamin Franklin once again –

Tell me and I forget.
Teach me and I remember.
Involve me and I learn.

So this is your chance to contribute to enabling the fraternity with the Os of OODA loop!

Relevant dates -
DVCON US – March 2-5, San Jose
DVCON India – September 10-11, Bangalore
DVCON Europe – November 11-12, Munich

Sunday, February 8, 2015

Designers should not verify their own code! REALLY?

Around 2 decades back, the demands from designs were relatively simple and the focus was on improving performance. Process nodes had longer lives, power optimization wasn’t even discussed and time to market pressure was relatively low given that the end products enjoyed long lives. In those days, it was the designer who would first design and later verify his own code, usually using the same HDL. Over the years, as complexity accelerated, a new breed of engineers entered the scene, the DV engineers! The rationale given was that there is a need for an independent pair of eyes to confirm whether the design is meeting the intent. Verification was still sequential to the design in the early days of directed verification. Soon, there was a need for constrained random verification (CRV) and additional techniques to contain the growing verification challenge. The test bench development now started in parallel to the design, further increasing the size of and need for verification teams. With non-HDLs, i.e. HVLs, entering the scene, the need for DV engineers was inevitable. All these years, the rationale of having an additional pair of eyes continued to be heard, to an extent that we have started believing that designers should not verify their own code.

In my last post I emphasized the need for collaboration, wherein designers and verification engineers need to come together for faster verification closure. Neil Johnson recently shared his conclusions in a post on designers verifying their own code. Here are my 2 cents on whether designers should not, or shall I say do not, verify their own code –

To start with, let’s look at what all verification involves. The adjoining figure summarizes the effort spent in verification, based on the study conducted by Wilson Research in 2012 (commissioned by Mentor). Clubbing some of the activities, it is clear that ~40% of the time is spent on test planning, test bench development and other activities. The remaining ~60% of the effort is spent on debug and creating + running tests. The DUT here can be an IP or an SoC.

When an IP is under development or an SoC is getting integrated, the DV engineers are involved in the 40% of the activities mentioned above. These are the tasks that actually fall in line with the idea of an additional pair of eyes validating design intent. They need to understand the architecture of the DUT, come up with a verification plan, and develop the verification environment and the hooks to monitor progress. At this level, the involvement of the design team starts with activities like test plan review, code coverage review, and inputs on corner cases and tests of interest.

So, once the design is alive on the testbench, do the designers just sit & watch the DV team validate the representation of the spec they developed? NO!

Debugging alone is a major activity that consumes an equal amount of effort, and sometimes more, from the designers to root-cause a bug. Apart from it, there is significant involvement of the design team during IP & SoC verification.

For IPs, CRV is a usual choice. The power of CRV lies in automating the test generation using the testbench. A little additional automation enables the designers to generate constrained tests themselves. Assertions are another very important aspect of IPs. With the introduction of assertion synthesis tools, designers work on segregating the generated points into assertions or coverage. For SoCs, apart from reuse of CRV, directed verification is an obvious choice. New tools for graph-based verification help designers try out tests based on the test plan developed by the DV engineer. Apart from this, corner case analysis and use-case waveform reviews are other time-consuming contributions put in by designers towards verifying the DUT.
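As an illustration of the kind of checks designers typically own, here is a minimal SVA sketch (the req/gnt handshake, the timing window and all signal names are hypothetical): concurrent assertions flag protocol violations as the CRV regressions run, while a cover property doubles as a functional coverage point.

```systemverilog
// Hypothetical req/gnt handshake checks a designer might bind to the RTL.
module arb_checks (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic gnt
);
  // Every new request must be granted within 1 to 4 cycles.
  property p_req_gets_gnt;
    @(posedge clk) disable iff (!rst_n)
      $rose(req) |-> ##[1:4] gnt;
  endproperty

  a_req_gets_gnt: assert property (p_req_gets_gnt)
    else $error("req not granted within 4 cycles");

  // No grant without a pending request.
  a_no_spurious_gnt: assert property (
    @(posedge clk) disable iff (!rst_n) gnt |-> req
  );

  // Segregated as coverage rather than a check: back-to-back grants.
  c_back2back_gnt: cover property (
    @(posedge clk) disable iff (!rst_n) gnt ##1 gnt
  );
endmodule
```

Whether a generated point like the back-to-back grant above becomes an assertion or a coverage item is exactly the judgment call the designer is best placed to make.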

Coming back to the rationale of having an independent pair of eyes verify the code, the implication was never that the designers shall not verify their own code. In fact, there is no way for the DV team to do it in a disjoint fashion. Today the verification engineer himself is designing a highly sophisticated test bench that is equivalent to the designer’s code in complexity. So it would be rather apt to say that it is two designs striking each other that enables verification, under the collaboration between the design & verification teams!

What is your take on this? Drop a note below!

Sunday, January 25, 2015

The 4th C in Verification

The 3 C’s of verification, i.e. Constraints, Checkers & Coverage, have been playing an important role in enabling faster verification closure. With growing complexity and shrinking market windows it is important to introduce a 4th C that can be a game changer in actually differentiating your product development life cycle. Interestingly, the 4th C is less technical but highly effective in its results. It is agnostic to tool, flow or methodology, but if introduced and practiced diligently it would surely yield multi-fold returns. Since verification claims almost 70% of the ASIC design cycle, it is evident that timely sign-off on DV is the key to faster time to market for the product. Yes, the 4th C I am referring to is Collaboration!
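For reference, here is a compact SystemVerilog sketch of the first three C’s working together on a hypothetical ALU transaction (the class, opcodes and bins are all made up for illustration); the 4th C is about how quickly the people around such code converge.

```systemverilog
// Hypothetical ALU transaction showing Constraints, Checkers & Coverage.
class alu_txn;
  rand bit [7:0] a, b;
  rand bit [1:0] op;                      // 0:add 1:sub 2:and 3:or

  // C1 - Constraints: shape the legal stimulus space, biased to corners.
  constraint c_ops     { op inside {[0:3]}; }
  constraint c_corners { a dist { 8'h00 := 1, 8'hFF := 1, [8'h01:8'hFE] :/ 8 }; }

  // C2 - Checker: predict the result and compare against the DUT response.
  function bit check(bit [8:0] dut_result);
    bit [8:0] expected;
    case (op)
      0: expected = a + b;
      1: expected = a - b;
      2: expected = a & b;
      default: expected = a | b;
    endcase
    return (dut_result == expected);
  endfunction

  // C3 - Coverage: measure which parts of the space were actually exercised.
  covergroup cg;
    coverpoint op;
    coverpoint a { bins zero = {0}; bins max = {255}; bins rest = default; }
  endgroup

  function new(); cg = new(); endfunction
endclass
```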

UVM demonstrates a perfect example of collaboration within the verification fraternity to converge on a methodology that benefits everyone. Verification today spreads beyond RTL simulations to high level model validation, virtual platform based design validation, analog model validation, static checks, timing simulations, FPGA prototyping/emulation and post silicon validation. What this means is that we need to step out and collaborate with different stakeholders enabling faster closure.

The first & foremost are the architecture team, RTL designers & analog designers who conceive the design, realize it in some form or the other, and many a time fall short on accurate documentation. The architecture team can help to a large extent in defining the context under which the verification needs to be carried out, thereby narrowing down the scope. With a variety of tools available, the DV teams can work closely with designers to clean the RTL, removing obvious issues that would otherwise stall simulation progress. Further, assertion synthesis and coverage closure would help in smoothly closing verification at different levels. Working with analog designers can help tune the models and their validation process w.r.t. the circuit representation of the design. This enables faster closure of designs that see an increased share of analog on silicon.

Next are the tools that we use. It is important to collaborate with the EDA vendors, not just being users of the tools but working closely with them in anticipating the challenges expected in the next design, and being early adopters of the tools to flush out the flows and get ready for the real drill. Similarly, joining hands with the IP & VIP vendors is equally crucial. Setting the right expectations with the IP vendors on the deliverables from a verification viewpoint, i.e. coverage metrics, test plans, integration guide, integration tests etc., would help in faster closure of SoC verification. Working with VIP vendors to define how best to leverage the VIP components, sequences, tests & coverage etc. at block and SoC level avoids redundant efforts and helps in closing verification faster.

The design service providers augment the existing teams, bringing the required elasticity to project needs, or take up ownership of derivatives and execute them. These engineers are exposed to a variety of flows and methodologies while contributing to different projects. They can help in introducing efficiency to the existing ways of accomplishing tasks. Auditing existing flows and porting legacy environments to better ones is another way these groups can contribute effectively if partnered with aptly.

Finally, the software teams that bring life to the HW we verify. In my last blog I highlighted the need for HW & SW teams to work more closely and how the verification team acts as a bridge between the two. Working closely with the SW teams can improve reusability and eliminate redundancies in the efforts.


Collaboration today is the need of the hour! We need to be open to recognizing the efforts put in by the different stakeholders from the ecosystem to realize a product. Collaboration improves reuse and avoids a lot of wasted effort in terms of repeated work or incorrect understanding of intent. Above all, the camaraderie developed as part of this process would ensure that any or all of these folks are available at any time to jump in at the hour of need to cover for unforeseen effects of Murphy’s law.

Drop in your experiences & views with collaboration in the comments section.

Sunday, January 11, 2015

HW - SW : Yes, Steve Jobs was right!

The start of the year marked another step forward towards the NEXT BIG THING in semiconductor space, with a fleet of companies showcasing interesting products at CES in Las Vegas. In parallel, the VLSI Conference 2015 at Bangalore also focused on the Internet of Things, with industry luminaries sharing their views and many local start-ups busy demonstrating their products. As we march forward to enable everything around us with sensors, integrating connectivity through gateways and associated analytics in the cloud, the need for smaller form factors, low power, efficient performance and high security at the lowest possible cost in limited time is felt more than ever. While there has been remarkable progress in SoC development targeting these goals, the end product landing with the users doesn’t always reflect the perceived outcome. What this means is that we are at a point where HW and SW cannot work in silos anymore; they need multiple degrees of collaboration.

To enable this next generation of product suites, there is a lot of debate going on around CLOSED vs OPEN source development. Every discussion refers to the stories of Apple vs Microsoft/Google or iOS vs Android etc. While open source definitely accelerates development in different dimensions, we would all agree that some of the major user experiences were delivered in a closed system. Interestingly, this debate is more philosophical! At a fundamental level, the reason for closed development was to ensure the HW and SW teams are tightly bound – a doctrine strongly preached by Steve Jobs. From an engineering standpoint, with limited infrastructure available around that time, a closed approach was an outcome of this thought process. Today, the times have changed and there are multiple options available at different abstraction levels to enable close knitting of HW and SW.

To start with, the basic architecture exploration phase of partitioning the HW and SW can be enabled with the virtual platforms. With the availability of high level models one can quickly build up a desired system to analyze the bottlenecks and performance parameters. There is work in progress to bring power modelling at this level for early power estimation. A transition to cycle accurate models on this platform further enables early software development in parallel to the SoC design cycle. 

Once the RTL is ready, the emulation platforms accelerate the verification cycle by facilitating the testing to be carried out with the external devices imitating real peripherals. This platform also enables the SW teams to try out the code with the actual RTL that would go onto the silicon. The emulators support performance and power analysis that further aid in ensuring that the target specification for the end product is achieved. 

Next, the advancements in the FPGA prototyping space finally give the required boost to have the entire design run faster, ensuring that the complete OS can be booted with use-cases running much ahead of the Si tape-out, providing insurance for the end product realization.

This new generation of EDA solutions is enabling bare metal software development to work in complete conjunction with the hardware, thereby exploiting every single aspect of the latter. It is the verification team that is morphing itself into a bridge between the HW and SW teams, enabling the SHIFT LEFT in the product development cycle. While the industry pundits can continue to debate over the closed vs open philosophy, the stage is all set to enable HW SW co-development in close proximity under either of these cases.

As Steve Jobs believed, the differentiation between a GOOD vs GREAT product is the tight coupling of the underlying HW with the associated SW topped with simplicity of use. Yes, Steve Jobs was right and today we see technology enabling his vision for everyone!

Wish you & your loved ones a Happy and Prosperous 2015!

Disclaimer: The thoughts shared are an individual opinion of the author and not influenced by any corporate.

Sunday, December 28, 2014

Top 10 DV events of 2014


The semiconductor industry, while growing at a modest rate, is at a crossroads to define the next killer category of products. The growth in the phone and tablet segment is bound to follow in the footsteps of the PC industry. IoT (Internet of Things) has been the buzzword, but a clear definition of the roadmap is still fuzzy. Infrastructure, as always, would continue to grow due to the insatiable appetite for faster connectivity and the explosion in the amount of data coming in with cloud computing. Irrespective of the product definitions, the need for increased functionality on smaller die sizes with low power and faster time to market at low cost would prevail. To respond to this demand, the DV community has been gearing up in small steps, and some of the important ones were taken in 2014. As the stage gets ready to bid farewell to 2014 and welcome the New Year, let’s have a quick recap of these events, in no particular order –

1. UVM 1.2 – The final version of the Universal Verification Methodology was released by Accellera for public review before taking it forward for IEEE standardization. This finally marks an end to the methodology war and the resulting confusion.

2. PSPWG – While UVM has been an anchor for IP verification and reuse, verification of complex scenarios at SoC level is still a challenge particularly when multiple processors are involved and we are looking for reuse across compute platforms. Portable Stimulus Proposed Working Group kicked off this year bringing in stakeholders across the industry to brainstorm on a Unified Portable Stimulus definition as a potential answer to the SoC verification challenge.

3. Formal – Cadence bought Jasper for $170m, claiming the largest formal R&D team on the planet. It is expected to help customers further, wherein the best formal technologies from either side would combine and complement the simulation and emulation platforms.

4. Verification Cockpit – Another interesting shift seen with major EDA vendors is pulling all verification solutions under one umbrella. This includes simulation, emulation, prototyping, formal, coverage, debug and VIPs etc. This is an important step for future SoCs wherein the DV engineer can use the best solution for a given problem to achieve faster verification sign-off.

5. Emulation – The wait was finally over, with emulation solutions finding wide adoption across the industry. This year witnessed that emulation is no longer a luxury but a necessity to meet product schedules by accelerating verification turnaround and enabling the shift left in the product development cycle.

6. DVCON – This year marked another milestone wherein DVCON expanded its reach, with Accellera sponsoring DVCON India and DVCON Europe, providing an excellent platform for engineers to connect and share their experiments and experiences.

7. AMS – Mentor Graphics acquired Berkeley Design Automation, Inc., bringing in the required flow addressing analog, mixed-signal, and RF circuit verification. This move is a step towards integrating a fairly focused solution with the existing expertise at Mentor to address the next generation verification challenges that would unfold as analog claims a larger share of silicon.

8. VIP – This year, going with VIP solutions was a no-brainer. The build vs buy debate is long gone and the industry has embraced third party VIPs with open arms. For a change, the non-occurrence of any major event in the VIP domain was itself an event!

9. X-prop – Another solution in verification that created much steam is the X-propagation (X-prop) support from all vendors, enabling teams to weed out functional X’s from the design at RTL level. This helped many projects reduce the turnaround time spent in GLS (a small illustration follows this list).

10. Standards – Accellera announced revisions of a few other standards, including the SystemC core language (SystemC 2.3.1), an update to the standard released in 2011 focusing on transaction level modelling; the SystemC verification library (SCV 2.0), containing an implementation of the verification extensions; and Verilog-AMS 2.4, which includes extensions to benefit verification, behavioral modelling and compact modelling.
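Going back to point 9, a classic instance of the RTL X-optimism these tools target looks something like the hypothetical snippet below (module and signal names are made up): plain RTL semantics treat an X-valued select as false and silently pick the else branch, while X-prop-enabled simulation propagates the X and exposes the missing initialization.

```systemverilog
// Hypothetical example of X-optimism that X-prop tools are built to expose.
module mux_xopt (
  input  logic       sel,     // may be X if driven by an uninitialized register
  input  logic [7:0] a, b,
  output logic [7:0] y
);
  always_comb begin
    // Classic RTL semantics: if sel is X, the condition evaluates as false
    // and y silently takes b -- the X is swallowed (X-optimism).
    // Gate-level simulation, or X-prop-enabled RTL simulation, would
    // propagate the X to y instead, flagging the issue long before GLS.
    if (sel)
      y = a;
    else
      y = b;
  end
endmodule
```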

I hope you had an eventful 2014 with different tools, flows and methodologies. Drop in a comment with what you felt was most interesting in 2014 as we welcome 2015 and the events that would unfold with time.

Wish you all a Happy and Eventful 2015!

Sunday, October 26, 2014

Verification and Firecrackers

Last week, the festive season in India was at its peak with celebrations everywhere. Yes, it was Diwali - the biggest and the brightest of all festivals.
Diwali aka Deepawali means a 'row of lights' (deep = light and avali = a row). It’s the time when every house, every street, every city, the whole country is illuminated with lights, filled with the aroma of incense sticks, delicious food and the sound of firecrackers all around. People rejoice, paying their obeisance to the Gods for showering health, wealth and prosperity. For many, besides the lights & food, it is the crackers that are of supreme interest. The variety ranges from the ones that light up the floor and the sky to the ones that generate a lot of sound. On the eve, while everyone was busy in full ecstasy, an interesting observation caught my attention, setting the neurons in my brain connecting it to verification and resulting in this post.

The Observation

While enjoying the celebrations, the groups typically get identified based on age or preference for firecrackers. Usually the younger ones are busy with the stuff that sparkles while the elder ones get a kick out of the noisy ones. A few are in transition from light to sound with some basic items. The image below shows a specific type of firecracker, both bundled and dismantled.



The novices were busy with the single ones, bursting one at a time. A hundred such pieces would probably take 50 minutes or so, progressing almost linearly. If one of them didn’t burst, they would actually probe it to see whether it worked, or throw it away. On the other hand, the grown-ups were occupied with the bundled ones that, once fired, would go on till all of them gave out their light and sound. The more the pieces, the longer it would take, but the time was much less than firing individual pieces. It was hard to tell whether a 100-piece bundle actually resulted in 100 sounds or not, but overall the effect was better.

The Connection

Observing the above, I could quickly connect it to the learning we have been through as verification engineers. It was directed tests initially, wherein we would develop one test at a time and get it going. The total time to complete all the tests was quite linear, and for a limited number of tests the milestones were visible. Slowly we incorporated randomness into verification, where a bundle of tests could run in a regression, hitting different features quickly. The run time depends on the number of times the tests run in the random regression. Yes, this also results in redundancy, but the gains are higher, especially when the number of targets is large.
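The contrast maps directly onto code. Here is a hypothetical sketch (the transaction class, addresses and the tasks are placeholders): the directed task is one firecracker at a time, the randomized loop is the bundle.

```systemverilog
// Hypothetical contrast between one-at-a-time and bundled stimulus.
class bus_txn;
  rand bit [15:0] addr;
  rand bit [31:0] data;
  // Keep the bundle inside the register block and word-aligned.
  constraint c_addr { addr inside {[16'h0000:16'h00FF]}; addr[1:0] == 0; }
endclass

module tb;
  // Directed: one hand-crafted scenario, fully predictable, scales linearly.
  task automatic directed_single_write();
    $display("WRITE addr=0x0010 data=0xDEADBEEF");
  endtask

  // Constrained random: one "bundle" covering many combinations per run.
  task automatic random_bundle(int unsigned n = 100);
    bus_txn t = new();
    repeat (n) begin
      void'(t.randomize());
      $display("WRITE addr=0x%04h data=0x%08h", t.addr, t.data);
    end
  endtask

  initial begin
    directed_single_write();   // one firecracker
    random_bundle(100);        // the bundle
  end
endmodule
```

Some of the 100 random writes will repeat themselves, just as it was hard to count 100 distinct bangs from the bundle, but the features get hit far faster than writing 100 directed tests.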

The Conclusion

As for the firecrackers, the novices playing with individual ones may move to the bundles next year – a natural progression! This is an important conclusion as it demonstrates the learning steps for verification engineers too. Knowing SV & UVM is good but that doesn’t make one a verification engineer. A verification engineer needs to have a nose for bugs and develop code that can hit them fast and hard. This learning is hidden in working on directed tests initially and transitioning to constrained random thereafter. You would appreciate the power of the latter in a better way!

Try extrapolating it more to different aspects of verification and I am sure you would find the connections all throughout. Drop in a comment on your conclusions!

Disclaimer: The intention of the blog is not to promote bursting crackers. There are different views on the resulting pollution and environment hazards that the reader can view on the web and make an individual choice.

Sunday, October 12, 2014

Moving towards Context Aware Verification (CAV)

The race between the prediction and the achievement of Moore’s law has had a multi-fold impact on the semiconductor industry. Reuse has come to the rescue, both from the design and the verification viewpoint, to help teams achieve added functionality on a given die size. This phenomenon led to the proliferation of the IP & VIP market. Standardization of interfaces further enables this move by shifting the product differentiation towards architecture and a limited set of proprietary blocks. To enable continued returns and cater to different application segments, the IP needs to be highly configurable.

Verifying such a flexible IP is a challenge; integrating it for a given application and ensuring that it works further complicates the problem. Given that verification already claims the majority of the design cycle effort, it is important to optimize the resources, be it tool licenses, simulation platforms or engineers’ bandwidth. A focused attempt is required to ensure that every effort converges towards the end application, i.e. the context in which the design would be used, irrespective of the flexibility that the silicon offers. This is the subject of Context Aware Verification (CAV)!

The verification space has been experiencing substantial momentum on multiple fronts so as to fill the arsenal of the verification engineer with all sorts of tactics required for the challenges coming our way. While these developments are happening independently of each other, they seem to converge towards enabling CAV. Let’s take a quick look at some of these techniques –

Traditionally, the test plan used to answer what is to be verified, until constrained random entered the scene, where how to verify, what to verify and when are we done all need to be addressed by the verification plan. Today, verification planner tools enable us to develop an executable verification plan with the flexibility to tag features based on the engineer who owns them, on milestones, on priorities and, above all, on any other custom definition. This customization is useful to club features with reference to a particular context or configuration in which the IP can operate. With this information, the end user of the IP can channel his efforts in a particular direction rather than wandering everywhere, thereby realizing CAV in the larger scheme of things.

Apart from the coverage goals that get defined as part of the vplan, there is a need for a subset of tests that would achieve these goals faster. What this means is that the test definition needs to be -
- Scalable at different levels (IP, sub-system & SoC)
- Portable across platforms (constrained random for block level, directed tests for SoC verification & validation)
- Taggable w.r.t. a given configuration, viz. a set of valid paths that the design would traverse in the context of a given application.
Graph based verification is a potential solution to all of this. There is a need to standardize these efforts, and to enable discussions in this direction Accellera has initiated the Portable Stimulus Proposed Working Group. Once there is a consensus on the stimuli representation, selection of a subset of tests targeting a given configuration would further boost CAV.
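While the standard is still being discussed, the configuration-aware part can already be approximated with plain SystemVerilog layering. The sketch below is hypothetical (the application contexts, the DMA transaction and all ranges are invented) and simply shows how a context knob prunes a configurable IP's stimulus down to the paths a given product would actually exercise:

```systemverilog
// Hypothetical sketch of context-aware stimulus: the same transaction class
// is reused, but a context setting prunes it to the end application.
typedef enum { CAM_PREVIEW, CAM_CAPTURE, AUDIO_PLAYBACK } app_ctx_e;

class dma_txn;
  rand bit [31:0]   src, dst;
  rand int unsigned burst_len;
  constraint c_base { burst_len inside {[1:256]}; }
endclass

class ctx_dma_txn extends dma_txn;
  app_ctx_e ctx;                          // set from the SoC use-case
  // Context narrows the legal space to paths the product will actually use.
  constraint c_ctx {
    (ctx == CAM_PREVIEW)    -> burst_len inside {[16:64]};
    (ctx == CAM_CAPTURE)    -> burst_len inside {[128:256]};
    (ctx == AUDIO_PLAYBACK) -> burst_len inside {[1:8]};
  }
  function new(app_ctx_e c); ctx = c; endfunction
endclass

module tb;
  initial begin
    ctx_dma_txn t = new(CAM_CAPTURE);
    repeat (10) begin
      void'(t.randomize());
      $display("ctx=%s burst=%0d", t.ctx.name(), t.burst_len);
    end
  end
endmodule
```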

With design sizes marching north, the simulation platform falls short of achieving verification closure in the given time. A variety of emulation (HW acceleration or prototyping) platforms provide an excellent option to speed up this process based on the design requirements. While the verification teams benefit from the simulation acceleration, these boxes also help in early software development and validation. The shift left approach in the industry is enabling basic bring-up of the OS and even running real-time apps on these platforms much before the silicon is back. The ability to run the end software on the RTL brings further focus and is an important step towards achieving CAV.

Once all these technologies reach maturity a combined solution would bring in the required focus in the context of the end application. 

As Chris Anderson said – In the world of infinite choice, context – not content – is king!

Our designs with their myriad configurations are no different. It is the context that would bring in convergence faster, making the products that follow this flow king!

Saturday, October 4, 2014

DVCON India 2014 : Event Recap!

It’s been a week and we are still receiving messages and emails on the success of the first edition of DVCON in India. The campaign kicked off a few months back and the team put in relentless efforts for the success of this event. There was an overwhelming response from the ESL and DV community once the call for abstracts was announced. Panelists in both tracks conducted multiple reviews for every paper, and after long discussions the chosen ones were intimated for presentation. The content of the papers was a clear indication of the quality of work and due diligence put in by engineers in this geography. Abstracts were submitted by authors outside India too, and many traveled to present them on Sept 25-26, 2014.

When I reached early in the morning, the corridors, the halls and the stage were all set to kick start the 2 days filled with learning, sharing and networking. While the (paid) registrations prior to the event indicated the expected numbers, scores of spot registrations further added icing to the cake. Interestingly, the halls were full by 9:30 AM and the program kicked off on time with the lamp lighting ceremony and a welcome note from Umesh Sisodia – Chair, DVCON India 2014. Dr. Walden C. Rhines – CEO, Mentor Graphics mesmerized the crowd with his keynote on 'Accelerating EDA innovation through SoC design methodology convergence'. We couldn’t have asked for a better start! Next, Dr. Mahesh Mehendale – CTO, MCU at Texas Instruments shared excellent insight on 'Challenges in the design and verification of ultra-low power “more than Moore” systems'. The mood of the conference was all set with the success of the first session. The conference bifurcated from here into ESL and DV tracks, with interesting topics getting discussed as part of invited talks. During tea breaks, the lunch hour and evening cocktails, the long galleries were full of chit chat between engineers, exhibitors and poster presenters. After lunch, 4 tutorials started in parallel with industry pundits talking about technologies that are in mass adoption and what to look for next. All sessions were jam-packed, with the audience eager to ask questions during the sessions and continuing that inquisitiveness till the day wrapped up with informal meetings over cocktails.

DAY 2 witnessed the same zeal and enthusiasm. It started off with Ajeetha Kumari – Vice Chair, DVCON India 2014 welcoming the audience, followed by Dennis Brophy – Vice Chairman, Accellera sharing insights on the different working groups within Accellera and inviting engineers to actively participate and contribute. Next was the guru Janick Bergeron – Verification Fellow, Synopsys talking about 'Where is the Next Level of Verification Productivity Coming from?' The morning session wrapped up with a keynote from Mr. Vishwas Vaidya – AGM, Electronics, Tata Motors discussing 'Automotive Embedded Systems: Opportunities and Challenges'. From there on, it was the papers & posters running in 4 to 5 parallel tracks, with engineers sharing the challenges they faced and the solutions they discovered as part of the process. Yes, there was an award for the best of these, and the judges were none other than the audience themselves, casting their votes by the end of all the sessions. The crowd continued to stay back, anxious to know the results and participate in a series of lucky draws that followed. While a wide variety of topics got covered on DAY 2, the results were all in favor of UVM papers, clearly confirming what Dr. Wally Rhines presented at the start: that India leads the world in adoption of SystemVerilog and UVM (based on a recent survey).

By the Day 2 evening, the corridors, the halls and the stage were silent again, maybe exhausted from the 2 days of long sessions, maybe enjoying the recap of the eagerness shown by 400+ delegates, maybe feeling proud of their contribution to history by hosting the first DVCON in India.

Many congratulations to those who were able to experience the event and be part of this historical moment. Live tweets from the event can be searched with #DVConIndia on Twitter. The proceedings, photographs and videos of the event will be made available soon on the official website www.dvcon.in. If you missed it this year, make sure you make it to next year's event, which will surely be bigger & better!!!


Yours truly presented an invited talk “...from Nostalgia to Dreams ...the journey of verification” on DAY 1. Stay tuned for a few blogs from that discussion!