Wednesday, February 18, 2009

Automatic Coverage Closure – my perspective

Recently, EDA tools have been emerging in the area of “Automatic Coverage Closure” (ACC) that promise a new level of automation in the CDV/MDV/any_other_buzzword_Driven_Verification process. A significant name in this arena is nuSym, a relatively new EDA player. There have been a few good reviews about them @ Deepchip.com:

http://deepchip.com/items/0479-05.html

http://deepchip.com/items/0473-06.html

http://deepchip.com/items/dvcon07-06.html

And another one @ SiliconIndia:

http://www.siliconindia.com/magazine/articledesc.php?articleid=DPEW289996212

 

And very recently on VerifGuild:

http://www.verificationguild.com/modules.php?name=Forums&file=viewtopic&t=3102

I like Gopi’s post/comment because I have the same opinion about CRV (Constrained Random Verification) – it catches scenarios/bugs that you didn’t envision, either via constraints or coverage (or otherwise). Now if we fool ourselves by chasing “only the existing/identified coverage holes”, we fall into a trap. This is in line with what Sundaresan Kumbakonam of BRCM (need his profile? See: http://vlsi-india.org/vsi/activities/dvw05_blr/index.html) shared with me once:

 

Quoting Sundaresan:

 I don’t believe much in the idea of “writing a functional coverage model” and then tweaking a constraint here or there, or writing a “directed test”, to fill the hole.

Coming back to my view, I believe some redundancy via randomness/CRV is actually good. In my past verification cycles I have seen design errors triggered by “repeated patterns” – no big deal, is it?

So where exactly do these ACC tools fit?

Referring back to:

http://www.verificationguild.com/modules.php?name=Forums&file=viewtopic&t=3102

>> whether these tools are only used to reach last few % of coverage goal which is hard to reach ?

I would differ here: they should be very useful somewhere during the middle phase – neither too early, nor too late. Too early, and perhaps we don’t have the full RTL and/or functional coverage model; too late, and perhaps our focus should be more on “checking” than on coverage alone (as Nagesh pointed out on VerifGuild). I would add that in those last minutes, coverage should be taken for granted – meaning it is a *must* and not a *nice to have* – and the focus should be on looking for any failures.

To me a reasonable flow with these ACC tools would be:

  • Run with CRV and measure code coverage. Add checkers.
  • Add functional coverage, and use CRV again to hit it, treating the coverage points as “potential trouble spots” rather than “actual scenarios” themselves. In the few cases where the scenario description is easy to capture in functional coverage syntax, this is great. IMHO the existing coverage syntax is a little too verbose and, to a large extent, unusable for solid, easy-to-use coverage specification. Specifically, the SV syntax overhead for coverage is just too much for me. IEEE 1647 “e” fares slightly better, but that’s a different story altogether. I’m still on the lookout for a higher-level coverage specification language (matter for another blog post anyway).
  • Once the RTL and the coverage model are reasonably stable, use ACC regularly as a “sanity” test on every interim RTL release. I believe ACC has HUGE potential here – if we can optimize the tests needed to release interim RTL versions, we save quality time and enable faster turnaround.
  • Towards the end, enable “plain CRV” (without the ACC bias) and look for a “trouble-free regression for XX days”.
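As a toy illustration of the “CRV first, then target the holes” part of this flow, here is a minimal Python sketch. Everything in it is invented for illustration – the transaction fields, the bins, and the trivial `target_hole` “solver” standing in for what a real ACC tool would derive from actual RTL and constraints:

```python
import random

# Toy coverage model over a 2-field transaction (all names hypothetical).
LEN_BINS  = ("small", "medium", "large")
KIND_BINS = ("read", "write", "atomic")
ALL_BINS  = {(l, k) for l in LEN_BINS for k in KIND_BINS}

def classify(length, kind):
    """Map a transaction onto its (length_bin, kind) coverage bin."""
    if length < 16:
        lb = "small"
    elif length < 256:
        lb = "medium"
    else:
        lb = "large"
    return (lb, kind)

def run_crv(n):
    """Phase 1: plain constrained-random generation; return the bins hit."""
    return {classify(random.randint(0, 1023), random.choice(KIND_BINS))
            for _ in range(n)}

def target_hole(length_bin, kind):
    """Phase 2 (the 'ACC' step): derive a transaction landing in a given hole.
    Trivial here; a real tool derives this from path/constraint analysis."""
    return {"small": 8, "medium": 64, "large": 512}[length_bin], kind

random.seed(2009)
covered = run_crv(40)                    # the random phase usually leaves holes
for hole in sorted(ALL_BINS - covered):  # the targeted phase fills them
    covered.add(classify(*target_hole(*hole)))

print("all", len(ALL_BINS), "bins covered")  # -> all 9 bins covered
```

The point of the sketch is only the division of labour: randomness does the broad sweep (and hits things you never asked for), while the targeted step mops up the identified holes.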

 

And while speaking to a friend of mine here a while back, he came out dead against the idea of using these ACC tools merely for stimulus. He likes the idea of ACC if it can be used to:

  • Fill functional coverage holes
  • Fill code coverage holes
  • Fill assertion coverage misses/holes

A tough ask, but it looks like nuSym can handle that – at least based on the early reviews so far. Also, reading their whitepaper on “intelligent verification”, they do path tracing that enables them to systematically target code coverage without getting into the formal world – cool idea indeed! Kudos to the nuSym folks (some of them my ex-colleagues, BTW).
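The path-tracing idea can be caricatured in a few lines of Python: record which branch outcomes the stimulus exercised, then derive inputs that flip the uncovered ones. The “design” and the brute-force “solver” below are purely illustrative stand-ins – nuSym’s actual technology traces real RTL branches and is far more sophisticated:

```python
def dut(x):
    """Stand-in 'design': returns the branch outcomes this input exercises."""
    return {("x>100", x > 100), ("x%7==0", x % 7 == 0)}

def solve(cond, want):
    """Brute-force 'solver': find an input driving branch cond to want.
    A real tool gets this from path tracing, not exhaustive search."""
    tests = {"x>100": lambda x: x > 100, "x%7==0": lambda x: x % 7 == 0}
    return next(x for x in range(1000) if tests[cond](x) == want)

ALL_OUTCOMES = {(c, v) for c in ("x>100", "x%7==0") for v in (True, False)}

seen = dut(3)                       # one "random" stimulus: covers 2 of 4 outcomes
for cond, want in sorted(ALL_OUTCOMES - seen):
    seen |= dut(solve(cond, want))  # target each uncovered branch outcome

print(sorted(c for c, v in seen if v))  # -> ['x%7==0', 'x>100']
```

Note how no property checking is involved – the tool only needs to connect branch conditions back to stimulus, which is why it can stay in the simulation world.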

And on the application of these ACC tools for the poor, non-CRV/CDV folks – there is light at the end of the tunnel if you read nuSym’s paper. We at CVC also have ideas on how to use this for verifying a highly configurable IP with plain-vanilla Verilog/task-based TBs. We need to prototype it before we can discuss it in detail, though.

 

Anyway, a good topic for an otherwise downturn mood.

 

More to follow.

Srini

P.S. Sorry for the “random” rambling, after all we are talking of “random verification” :-)

4 comments:

Alex Seibulescu said...

I've just randomly rambled something on this topic on Verification Guild, here's the link so I don't have to say the same thing which will lead to no coverage improvement:

http://verificationguild.com/modules.php?name=Forums&file=viewtopic&t=3102

Anonymous said...

Hi Srini,

I have a few comments…

>> I believe some redundancy via randomness/CRV is actually good.

I believe even the redundancy must be calibrated based on its contribution to verification. I mean, what’s the point in repeating if it doesn’t add value? One good measurement point is the rate of bug discovery: if increased redundancy is not improving the rate of finding bugs, then that redundancy probably isn’t worth increasing.
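Rakshit’s “rate of bug discovery” yardstick is easy to sketch, assuming we log unique bugs found per regression week (all the numbers and the threshold below are made up for illustration):

```python
# Hypothetical unique-bug counts per week from a nightly CRV regression.
bugs_per_week = [14, 11, 9, 5, 3, 1, 1, 0]

def discovery_rate(counts, window=3):
    """Trailing average of bugs found per week over the last `window` weeks."""
    recent = counts[-window:]
    return sum(recent) / len(recent)

# Stop piling on redundant random cycles once the trailing rate drops
# below a (made-up) threshold.
THRESHOLD = 1.0
rate = discovery_rate(bugs_per_week)
verdict = "keep running" if rate >= THRESHOLD else "diminishing returns"
print(f"{rate:.2f} bugs/week -> {verdict}")  # -> 0.67 bugs/week -> diminishing returns
```

The exact window and threshold are project judgment calls; the point is simply that “more redundancy” should be justified by the curve, not by habit.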

>> Now if we fool ourselves by chasing “only the existing/identified coverage holes”, we fall into a trap.

In my opinion, functional and code coverage goals must be treated as the minimum verification goal. Treating functional coverage as the sign-off criterion is obviously a mistake.

A coverage convergence tool must assist us in achieving that minimum verification goal “ASAP”, so that we can focus more on writing more meaningful constraints, coverage, assertions and properties – and, of course, on reviewing all of these.

Coverage Convergence as a technology can be used at all stages.

Early verification – ACC can generate test stimulus to achieve higher code coverage goals. This should be much more than lint checks: say I have some RTL and no tests, but still want to make a preliminary release to the verif and backend teams. If I have some BFMs and checkers available, the vectors suggested by ACC would achieve some meaningful verification.

Middle verification – By now I would have 70–80% of my BFMs, checkers, assertions, properties, constraints, coverage models and scoreboards in place. Again, ACC would help me in closing on coverage goals for completeness, and now I can focus more on randomization, constraints etc. The key is that I don’t spend much time analyzing the coverage holes and their closure, given that all coverage goals are achievable. And if a goal is not achievable, then I will not spend weeks hunting a non-real hole which can never be filled.

Late verification – I would be doing mostly performance-related bug fixes or timing-related fixes. If the RTL changes don’t result in coverage model changes, I can quickly validate my changes by running the minimum set of vectors suggested by the ACC tool to achieve a quick verif closure.

I haven't evaluated any tools on coverage convergence. However, in my opinion the real convergence tool would be able to achieve the following:

1. Code coverage closure – Conclusively prove or disprove that certain branches are/aren’t achievable, or that, say, a read-only register bit will not give strict toggle coverage no matter what the stimulus is. Dead code is really dead – no stimulus can make it alive, etc.
2. Functional coverage closure – Bias randomization such that all holes are filled, or prove that a coverage hole points to a problem with the constraints.
3. Assertion coverage closure – Bias stimulus such that positive assertions can fire, or conclusively prove that certain assertions cannot fail no matter what stimulus within the given constraints is applied.
4. Property coverage closure – Either drive stimulus such that the properties are covered, or prove that they cannot be covered.
5. Minimal regression list – Produce a minimal vector set which can then be used to quickly verify RTL fixes with 100% coverage achieved, thus speeding up iteration on timing fixes or performance bug fixes. Please note that this regression list does not stop the verification process; it merely speeds up the frontend–backend handshake.
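Point 5 is essentially a set-cover problem, and the usual practical approach is a greedy heuristic. A small Python sketch – the test names and the bins each test hits are entirely invented:

```python
# Map each test to the set of coverage bins it hits (all data hypothetical).
test_coverage = {
    "smoke":      {"b0", "b1"},
    "rand_long":  {"b1", "b2", "b3", "b4"},
    "corner_rw":  {"b4", "b5"},
    "directed_x": {"b0", "b5"},
}

def minimal_regression(test_cov):
    """Greedy set cover: repeatedly pick the test covering the most
    still-uncovered bins. Not provably minimal, but a standard heuristic."""
    remaining = set().union(*test_cov.values())
    picked = []
    while remaining:
        best = max(test_cov, key=lambda t: len(test_cov[t] & remaining))
        if not test_cov[best] & remaining:
            break  # nothing left can help (unreachable bins)
        picked.append(best)
        remaining -= test_cov[best]
    return picked

print(minimal_regression(test_coverage))  # -> ['rand_long', 'directed_x']
```

Two tests reproduce the full coverage of four, which is exactly the kind of pruning that makes the quick frontend–backend handshake possible.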

Thanks
Rakshit

Anonymous said...

Rakshit,

Good points!

>>A coverage convergence tool must assist us in achieving minimum verification goal “ASAP”. So that we can focus more on writing more meaningful constraints, coverage, assertion, properties and of course review all of these.

Exactly! You want to spend your time developing assertions, checkers, functional coverage groups, etc. not tweaking constraints on the random variables to try and get the simulation to take a certain path in the design. Tools should do that.

>>Early Verification - ACC can generate test stimulus to achieve higher code coverage goals. This should be much more than lint checks. I have some RTL and no tests, still I want to make the preliminary release to verif and backend team. If I have some BFMs and checkers available the vectors suggested by ACC would achieve some meaningful verification.

Absolutely, and if you think about it, even before you have the BFMs and checkers you can STILL get some value out of the ACC tool by looking at the lines that were NOT covered even though fully random stimulus was applied to your module.

>> The key is that I don't spend much time analyzing the coverage holes and their closure, given that all coverage goals are achievable. And if a goal is not achievable, then I will not spend weeks hunting a non-real hole which can never be filled.

For the holes it couldn’t fill, the ACC tool should be able to give you some feedback as to why that is (constant expressions, unsatisfiable conditions, etc.).

I think you hit the nail on the head with the final 5 points you made about what an ACC tool should do, but keep in mind that such a tool would not be able to, for instance, “prove” that an assertion can never fail – that’s what formal property checkers do. However, the ACC tool will try its darnedest to make the assertion fail, and if it cannot, you can already have a high degree of confidence that the property holds. You also get a level of scalability that formal tools will never achieve, and a familiar, simulation-based debugging environment when a bug is found.

Regards,

Alex

Srinivasan Venkataramanan said...

Another potential user for this technique and his query @:

http://verificationguild.com/modules.php?name=Forums&file=viewtopic&p=14651#14651

Srini