Monday, March 19, 2018

Negative Testing in Functional Verification!!!


Imagine someone on an important call when their mobile device suddenly reboots! The call was to report that the devices installed in their smart home seem to be behaving erratically, with only elderly parents and kids at home to provide any further details. On booting up, the smartphone flashes a warning that there has been a security breach and data privacy has been compromised. Amidst this chaos, the car’s cruise control doesn’t respond to the pedals!!! Whew... nothing but one of the worst nightmares of the age of technology we live in! But what if some of it could come true someday? And what if the user has little or no idea about the technology involved?

The mobile revolution has enabled the common man to access technology and use it for a wide range of applications. Data from Internet World Stats suggest that internet adoption has grown from 0.4% of the world population in 1995 to 54.4% in 2017. Related data also indicate that a sizable portion of these users are elderly or barely literate. Ease of use has driven this adoption further, with the basic assumption that devices will function correctly 24x7 even when used incorrectly out of ignorance. The same assumption is now seamlessly extending into safety-critical domains such as medical and automotive, introducing several unknown risks for the user.

So how does this impact the way we verify our designs?

Traditionally, verification is taken to mean ensuring that the RTL is an exact representation of the specification. Given that the state space spanned by the design elements is huge, a targeted approach focused on positive verification has been the norm all along. Here, no proof of a bug is assumed to be equal to proof of no bug! The only traces of anything beyond this approach include –

- Introducing an asynchronous reset during test execution to check that the design boots up cleanly again (a minimal sketch of this follows the list).
- Introducing stimulus that triggers exceptions in the design.
- Simulating architectural or design deadlock scenarios.
- Exercising key signals and clocks in low-power scenarios and reviewing the corresponding design response.
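
As an illustration of the first item, here is a minimal sketch of what a mid-traffic asynchronous reset check might look like in a Python-based testbench. cocotb is assumed purely for illustration, and clk, rst_n, start and done are hypothetical signal names standing in for an arbitrary DUT:

```python
import random

import cocotb
from cocotb.clock import Clock
from cocotb.triggers import ClockCycles, RisingEdge, Timer


@cocotb.test()
async def async_reset_mid_traffic(dut):
    """Pull an asynchronous reset mid-traffic and check the DUT recovers."""
    cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())

    # Normal bring-up
    dut.rst_n.value = 0
    await ClockCycles(dut.clk, 5)
    dut.rst_n.value = 1

    # Kick off some nominal traffic (hypothetical 'start'/'done' handshake)
    dut.start.value = 1
    await RisingEdge(dut.clk)
    dut.start.value = 0

    # Yank reset at a random point, deliberately not aligned to a clock edge
    await ClockCycles(dut.clk, random.randint(10, 100))
    await Timer(random.randint(1, 9), units="ns")
    dut.rst_n.value = 0
    await Timer(3, units="ns")
    dut.rst_n.value = 1

    # A well-behaved design should boot up cleanly and complete a fresh
    # transaction; a real test would bound this wait with a timeout.
    dut.start.value = 1
    await RisingEdge(dut.clk)
    dut.start.value = 0
    await RisingEdge(dut.done)
```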


But as we move forward, with security and safety becoming key requirements of the design, is this good enough? There is a clear need to redefine the existing approach and bring negative testing into the mainstream! Negative testing ensures that the design can gracefully handle invalid inputs, unexpected user behavior, potential security threats, or defects such as structural faults introduced while the device is operational. Amidst shrinking design schedules, negative testing truly requires creative thinking coupled with focused effort.
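
To give a flavour of what "gracefully handling invalid inputs" can mean at the testbench level, here is a minimal sketch, again assuming a cocotb-style environment; the reserved opcode values and the cmd_valid, cmd_opcode, err_flag and busy signals are hypothetical placeholders that would in practice come from the specification:

```python
import cocotb
from cocotb.clock import Clock
from cocotb.triggers import ClockCycles

# Hypothetical opcode values that the specification marks as reserved/illegal
ILLEGAL_OPCODES = [0xE, 0xF]


@cocotb.test()
async def reserved_opcode_is_rejected(dut):
    """Drive reserved opcodes and expect a graceful error response, not a hang."""
    cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())
    dut.rst_n.value = 0
    await ClockCycles(dut.clk, 5)
    dut.rst_n.value = 1

    for opcode in ILLEGAL_OPCODES:
        dut.cmd_valid.value = 1
        dut.cmd_opcode.value = opcode
        await ClockCycles(dut.clk, 1)
        dut.cmd_valid.value = 0

        # Give the design a bounded window to flag the illegal command
        await ClockCycles(dut.clk, 20)
        assert int(dut.err_flag.value) == 1, (
            f"Reserved opcode {opcode:#x} was not flagged as an error"
        )
        assert int(dut.busy.value) == 0, (
            "Design appears stuck after an illegal command"
        )
```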

To start with, it is important to question the assumptions made while defining the verification plan for the design. Validating those assumptions can itself yield a set of scenarios to be verified under this category. Next, review the constraints applied while generating stimulus to list out the potential illegal inputs of interest. Care is needed in defining this list, as the state space is large; reviewing it in the context of the end application (Context Aware Verification) helps narrow down the illegal stimulus set. Beyond this, faults need to be injected at critical points inside the DUT using EDA tools or innovative testbench techniques. This is important for safety-critical applications, where the design needs to respond to random faults and exit gracefully while notifying about the fault, or even correct it. And of course, appropriate coverage needs to be applied to measure the reach of this additional effort; a sketch of fault injection with a simple coverage tally follows.
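
To make the fault-injection and coverage idea concrete, here is a small sketch that takes the testbench-technique route using cocotb's Force/Release handles; dedicated EDA fault-injection tools would do this far more systematically. The internal net names, the fault_detected output and the simple Python-set coverage tally are all hypothetical, and forcing internal nets depends on simulator access being enabled:

```python
import random

import cocotb
from cocotb.clock import Clock
from cocotb.handle import Force, Release
from cocotb.triggers import ClockCycles

# Hypothetical internal nets chosen as critical fault-injection points
FAULT_SITES = ["fsm_state", "parity_bit", "fifo_wr_ptr"]
faults_covered = set()  # poor man's coverage: which sites were actually hit


@cocotb.test()
async def inject_transient_bit_flip(dut):
    """Flip a bit on an internal net and check the design reports the fault."""
    cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())
    dut.rst_n.value = 0
    await ClockCycles(dut.clk, 5)
    dut.rst_n.value = 1
    await ClockCycles(dut.clk, 50)      # let nominal traffic run for a while

    site = random.choice(FAULT_SITES)
    net = getattr(dut, site)            # hierarchical handle into the DUT
    flipped = int(net.value) ^ 1        # corrupt the LSB of the chosen net
    net.value = Force(flipped)          # hold the corrupted value
    await ClockCycles(dut.clk, 2)
    net.value = Release()               # transient fault: let the net go again
    faults_covered.add(site)

    # Safety-critical expectation: the fault is detected and reported
    # (or corrected) within a bounded number of cycles.
    await ClockCycles(dut.clk, 20)
    assert int(dut.fault_detected.value) == 1, f"Fault on '{site}' went unnoticed"
```

In a real flow, the ad-hoc Python set above would give way to proper functional coverage so that the reach of the fault campaign can be measured and signed off.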

As we step into an era of billions of devices empowering humans further, it is crucial that this system of systems is defect free, especially where it touches the safety-critical parts of our lives. Negative testing is a potential way forward to ensure the reliability of designs for such applications. As is always said – 

Better safe than sorry!


4 comments:

  1. Thanks for sharing the valuable points, I think it still works.

  2. Great article, sir. I have just one question: can Artificial Intelligence help a verification expert in modelling the negative-testing environment? Is it ever possible?

  3. Sujeth - I shall take the second question first, i.e. yes, it is possible. Coming to the HOW of it, AI would depend upon learning from the past, which means the protocol for which you need to model an environment needs to have been around for some time and to still be evolving. In such a scenario, the need for having AI model the test environment may not actually arise, unless the state space is something uncontrollable - but then, do you want to design such a state space... and for what???

  4. Thanks for the explanation, sir. I now have a different perspective on it.
