Tuesday, October 13, 2009

Who should write assertions: the designer or the verification engineer?

Dear Readers,

Who should write assertions: the designer or the verification engineer?

The short answer is: both. Generally, the designer writes assertions that are embedded in the RTL, while the verification engineer writes assertions that are external to it: assertions on the interfaces of the design-under-test (DUT), plus coverage points, checkers and monitors for the testbench. Verification engineers may also add assertions to fill any holes left by the designer's RTL checks.
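To make the split concrete, here is a small sketch. All module and signal names (my_dut, busy, req, gnt, etc.) are hypothetical, just to illustrate the two styles: the designer's assertion lives inside the RTL, while the verification engineer's checker stays outside and is attached with bind.

```systemverilog
// Designer's assertion, embedded in the RTL:
module my_dut (input logic clk, rst_n, req, output logic gnt, busy);
  // ... RTL ...
  // Internal invariant: never grant while busy.
  a_no_gnt_while_busy: assert property (
    @(posedge clk) disable iff (!rst_n) busy |-> !gnt
  );
endmodule

// Verification engineer's interface checker, kept out of the RTL:
module dut_if_checker (input logic clk, rst_n, req, gnt);
  // Interface rule: every request is granted within 1 to 4 cycles.
  a_req_gets_gnt: assert property (
    @(posedge clk) disable iff (!rst_n) req |-> ##[1:4] gnt
  );
endmodule

// Attach the external checker without touching the designer's code.
bind my_dut dut_if_checker u_if_chk (.*);
```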

Controlling Assertions:

In any given DUT there can be many assertions, each consisting of one or more evaluation threads. Sometimes it is necessary to enable or disable certain sets of assertions. For example, during reset, all assertions not related to reset must be disabled, and during exception testing, the assertions related to the condition being deliberately violated must be disabled.

This means that a fine-grained mechanism must be defined for assertion control. One way to do this is to group assertions logically into categories. One or more categories can then be enabled or disabled at a time.
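One hedged way to build such categories (all names here are illustrative, not from any standard): gate each group of assertions with a testbench-controlled enable bit, so a test that deliberately violates a protocol can switch that one category off.

```systemverilog
// Package holding one enable bit per assertion category.
package assert_ctrl_pkg;
  bit en_reset_checks = 1;
  bit en_proto_checks = 1;
endpackage

module proto_checker (input logic clk, rst_n, valid, ready);
  import assert_ctrl_pkg::*;
  // Protocol-category assertion: valid must stay asserted until ready.
  // The category enable is folded into the disable iff guard.
  a_valid_stable: assert property (
    @(posedge clk) disable iff (!rst_n || !en_proto_checks)
    valid && !ready |=> valid
  );
endmodule

// An exception test can then disable just this category:
//   initial assert_ctrl_pkg::en_proto_checks = 0;
```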

There are many different mechanisms available for assertion control, each with different trade-offs. The $asserton/$assertoff system tasks are a global, run-time mechanism and can control all assertions or specific named ones. Compiler directives, by contrast, only allow assertions to be enabled or disabled at compile time; they cannot change anything dynamically during simulation.
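A minimal sketch of the run-time mechanism, using the reset case from above (tb_top, u_dut and the timing are assumed hierarchy names, not from the post):

```systemverilog
module tb_top;
  logic clk = 0, rst_n;
  always #5 clk = ~clk;

  // my_dut u_dut (...);  // DUT instance containing the assertions

  initial begin
    // Level 0 = all assertions below this scope: silence them in reset...
    $assertoff(0, tb_top.u_dut);
    rst_n = 0;
    repeat (4) @(posedge clk);
    rst_n = 1;
    // ...and re-arm them once the design is out of reset.
    $asserton(0, tb_top.u_dut);
  end
endmodule
```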

SV has many strong constructs and features through which an engineer can gain confidence and say verification is nearly finished. But still, two questions come to my mind: 1. How do you ensure that enough assertions have been written? 2. How do you know that the coverage you have written is 100% correct and covering the right behaviour?

I am eager to hear your inputs on these questions; please share your views.

Happy Reading,
ASIC With Ankit

4 comments:

Jay Panchal said...

For Question2:
==============
I think, in most cases, you could judge your verification environment (and so the functional coverage) based on the code coverage that you get. If code coverage is in the 90s, you can be a little bit relaxed that you have covered most of the corner cases, but there are still a few corner cases to look at. When both functional and code coverage seem to converge, with some deviations, you are done. For this, guidance from the designers can help a verification engineer gain confidence in his verification environment a little bit more quickly :)... (Designers should be present in the company for that ;) )...

Ankit Gopani said...

Jay,
Thanks for your response. I think code coverage ensures there are no holes in the design, but functionality-wise we should always rely on functional coverage, and that is again written by the verification engineer. In many cases I have seen that the interpretation itself is wrong and the coverage is coded on that interpretation; functionally it is wrong, yet during simulation it may still get covered. So it is possible that even with 100% functional coverage you are wrong for some corner cases that you missed or coded incorrectly...!

Code coverage gives you more confidence in the design, not in your testbench. So how will you make sure your testbench is doing exactly what it is supposed to do? Functional coverage answers that question. I am not saying code coverage won't give confidence in the design; it will.
But the question remains, since functional coverage is again coded by the verification engineer :-) So I would say we can claim 100% functional coverage and code coverage, but we cannot guarantee that the design is 100% bug free. What do you say? My point is that functional and code coverage give us confidence that we are on the right track and verification is nearly finished, not 100% done.

Jay Panchal said...

I think you are right to some extent, but here is what I feel:

(1) Code coverage is calculated based on the test cases that a verification engineer writes. So my point was: let's say your functional coverage is around 100% but the code coverage report says the DUT's code coverage is just 60%-70%; it means the verification environment has been tailored to its own needs and is not really a true test environment [this is true in most cases, unless the designer has really written crap code :)].

(2) When can we assure that a design is 100% bug free? The answer, according to me... maybe never... that's why we have so many releases of each product. As end users use the product, new corner-case bugs come up. But the verification environment can surely reduce these post-silicon bugs, if the verification engineer has sound knowledge of the protocol and an insight for finding corner-case bugs. That's what comes with experience.
And mostly, we can combine assertions and coverage to ensure correct functionality of the DUT, and thereby make our best effort to deliver a bug-free device to the end user.

(3) I don't think code coverage gives confidence only in the design and not in the verification environment. I believe they go hand-in-hand. That's why I said, "When both functional and code coverage seem to converge, with some deviations, you can be a little bit relaxed" :)

Ankit Gopani said...

Thanks Jay for your good inputs,

I fully agree with all the points you mentioned above. I also fully agree that "we cannot say the design is 100% bug free", and that's the reason we do so many releases of a design with bug fixes. But you are absolutely correct that we, as verification engineers, can reduce the possibility of post-silicon bugs, and to reduce that possibility we use assertions and coverage.

Your third point is also correct: code coverage will highlight isolated areas of untested design, so indirectly it tells us how efficient the verification environment is. If there are holes in the design that our verification environment has not exercised, we need to write specific tests to cover those isolated pieces of code.

This way we can make sure that we are nearly done, but still not done, with verification :-) :-)

Bottom line is: "We cannot say that the design is 100% bug free, but we can make sure that there should not be any bug (though there still may or may not be :-))"

-Ankit Gopani