Tuesday, February 19, 2013

Dare to think beyond UVM for SoC verification

 

Over the past few years, the term “pre-silicon verification” has become quite popular, and several technology advancements have helped in solving that puzzle. Some of the biggest contributors have been languages such as e/Specman and SystemVerilog, with supporting technologies such as constrained-random verification (CRV), coverage-driven verification (CDV) and assertion-based verification (ABV). These three technologies, used in unison, address the challenge at the block or intellectual property (IP) level fairly well. More recently, UVM has been developed as a framework for using these languages in the best possible manner and keeping these technologies scalable to larger designs such as system-on-chips (SoCs). Thanks to the time and effort the Accellera committee has devoted, UVM is becoming quite popular and the de-facto IP verification approach.

However, with SoCs there are several new challenges in the verification space that threaten to quickly outgrow the currently prevalent technologies such as CRV and UVM. One of the key pieces in an SoC is the embedded processor/CPU – one or more of them. With a transaction-based verification approach such as UVM, the CPU typically gets modeled as a BFM (bus functional model). Some customers call this a “headless environment”, indicating that the “head” of an SoC is indeed the CPU(s). In theory, both the CPU bus and the peripherals can be driven through grinding transactions via their BFMs.

[Figure-1: Sample headless SoC environment]

While this certainly helps to get started, engineers soon find it difficult to scale things up with advanced UVM features such as the virtual sequencer, virtual sequences, etc. Even with a deep understanding of these, developing scenarios around them has not been an easy task. The length of such sequences/tests, their debug-ability and their review-ability have started raising the question “are we hitting the limits of UVM?” – especially in the context of SoCs.
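As an illustration only (component and sub-sequence names are hypothetical, not from the original post), here is roughly what coordinating concurrent peripheral traffic via a UVM virtual sequence tends to look like – and every new scenario means another hand-written sequence of this kind:

// A minimal, hypothetical sketch of a virtual sequence coordinating
// concurrent traffic on two peripherals plus the CPU-bus BFM.
// The sub-sequence classes (cpu_cfg_seq, usb_traffic_seq, eth_traffic_seq)
// are assumed to be defined elsewhere in the testbench.
class soc_concurrent_vseq extends uvm_sequence;
  `uvm_object_utils(soc_concurrent_vseq)

  // Handles to the sub-sequencers, set by the virtual sequencer/test
  uvm_sequencer_base cpu_sqr, usb_sqr, eth_sqr;

  function new(string name = "soc_concurrent_vseq");
    super.new(name);
  endfunction

  task body();
    cpu_cfg_seq     cfg;
    usb_traffic_seq usb;
    eth_traffic_seq eth;

    cfg = cpu_cfg_seq::type_id::create("cfg");
    cfg.start(cpu_sqr);              // configure peripherals via the CPU-bus BFM

    fork                             // then drive concurrent peripheral traffic
      begin usb = usb_traffic_seq::type_id::create("usb"); usb.start(usb_sqr); end
      begin eth = eth_traffic_seq::type_id::create("eth"); eth.start(eth_sqr); end
    join
  endtask
endclass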

If you think that assessment is premature, hold on: the trouble has just started. Anyone involved in an SoC design cycle would agree that the so-called “headless environment” is just a start, and would most certainly want to move on to the actual CPU RTL model(s) executing C/assembly code.

[Figure-2: SoC environment with actual CPU RTL running C/assembly code]

This is a significant step in the pre-silicon verification process. The current UVM focus doesn’t really address this immediate need, forcing users to create a separate flow with a C-based environment around the CPU and to hand-code many of the same scenarios that were earlier tested with the “headless UVM” environment. Though the peripherals can still reuse their UVM BFMs, the “head” is now replaced with actual RTL, and the coordination/synchronization among the peripherals needs to be managed manually – no less than a herculean task. We have heard customers say, “I’ve spent two months re-creating concurrent traffic, a la the headless environment, in the C-based setup.”

The hunt has been on for a higher-level modeling of system-level scenarios that can then be run on either a headless or a C-based environment – keeping most of the scenarios as-is. Here is where graphs start to make a lot of sense, as human beings are well versed with the idea of mind maps (http://en.wikipedia.org/wiki/Mind_map) as a natural, intuitive way of thinking about simultaneous activities, interactions and flow of thoughts.

Breker has been the pioneer in this space, introducing a graph-based approach to functional verification. With graphs, users capture the IP-level scenarios as nodes and arcs, making it ideal for capturing the typical day-in-the-life (DITL) of the IP. Many such IP-level graphs can then be quickly combined to form an SoC-level scenario model such as the one below:

[Figure-3: SoC-level scenario model]

With a graphical scenario model, TrekSoC (http://www.brekersystems.com/products/treksoc), the flagship SoC verification solution from Breker, can then be asked to churn out either transactions for a headless environment or embedded C-tests for the actual CPU-based system – with the flip of a switch.

[Figure-4: Using scenario models with TrekSoC]

This is clearly beyond UVM’s intended goals: UVM was created to solve the problem of VIP reuse, and it serves that purpose very well.

Now, with C-tests being auto-generated, the possibilities are endless – they can be reused across the breadth of verification and validation on various platforms, starting with simulation, through emulation/prototyping, and all the way up to post-silicon validation.

Bottom line: UVM is serving the very purpose it was developed for – to create interoperable, reusable VIPs. However, full SoC verification is much more than a bunch of VIPs. It requires models at the next abstraction level, such as graph-based scenario models. Such scenario models can then be compiled by TrekSoC to produce C-tests and/or UVM transactions.

Missed a UVM field macro? Be ready for surprises – and a debug assistant!

Recently a UVM user pondered over the following question:

randomization NOT happening for seq_item variable if uvm_field_* is NOT enabled?

(http://goo.gl/TNSaz)

To appreciate the issue, consider the code snippet below:

[Image: code snippet (uvm_dbg_1) – sequence item with one `uvm_field_int commented out]

Since both hdr and pkt_len are declared rand, one expects both of them to be randomized. Note that one of the `uvm_field_int macros is commented out – to demonstrate the issue.
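The snippet above was posted as an image; a minimal sketch of what such a sequence item looks like (class name and field widths are hypothetical, reconstructed from the description) is:

// A hypothetical sequence item with one field-automation macro commented out
// to reproduce the issue described above.
class my_xactn extends uvm_sequence_item;
  rand bit [7:0]  hdr;
  rand bit [15:0] pkt_len;

  `uvm_object_utils_begin(my_xactn)
    // `uvm_field_int(hdr, UVM_ALL_ON)      // deliberately commented out for the demo
    `uvm_field_int(pkt_len, UVM_ALL_ON)
  `uvm_object_utils_end

  function new(string name = "my_xactn");
    super.new(name);
  endfunction
endclass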

Now a recipient/consumer of this transaction does a copy/clone at the destination. See the code snippet:

[Image: code snippet (uvm_dbg_3) – consumer doing a copy/clone]
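Again, the original snippet is an image; a hedged sketch of such a consumer (modeled here as a uvm_subscriber, names hypothetical) could be:

// A hypothetical consumer that clones every received item and prints the copy.
class my_consumer extends uvm_subscriber #(my_xactn);
  `uvm_component_utils(my_consumer)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  // Called for every transaction broadcast on the analysis port
  function void write(my_xactn t);
    my_xactn t_copy;
    $cast(t_copy, t.clone());   // clone() = create() + copy(); copy() relies on the field macros
    `uvm_info("CONSUMER", t_copy.sprint(), UVM_LOW)
  endfunction
endclass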

So far so good; let’s see what happens in a typical Questa simulation:

[Image: Questa simulation transcript (uvm_dbg_4)]

The above result of hdr appearing NOT to be generated occurs consistently across all seeds (see the forum post if needed). A typical user therefore suspects that the missing `uvm_field_int macro somehow controls randomization – which is neither intuitive nor true. This could consume quite a few debug cycles (recall that the macro above is commented out only for the demo; in the actual case reported in that forum post, the user simply forgot to add it in the first place).

A Debug assistant

In our regular VSV training sessions (www.cvcblr.com/trainings), we showcase potential applications of post_randomize, and one of the prominent ones is to “debug” the generated fields. See the code snippet below:

[Image: code snippet (uvm_dbg_5) – post_randomize based debug aid]
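As a hedged reconstruction (building on the hypothetical my_xactn above), the debug aid is as simple as adding this to the sequence item:

  // Inside the sequence item: print what randomize() actually produced
  function void post_randomize();
    super.post_randomize();
    `uvm_info("POST_RAND",
              $sformatf("hdr = 0x%0h, pkt_len = %0d", hdr, pkt_len), UVM_LOW)
  endfunction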

With the above code added, here is what our friendly Questa shows us in simulation:

[Image: Questa simulation transcript (uvm_dbg_2) – hdr is indeed randomized]

So clearly the hdr field does get randomly generated. It is only when a copy of the container class is created that the field gets skipped in the “copy process” – and that is due to the missing macro. Focus on the missing/commented macro below:

[Image: code snippet (uvm_dbg_1) again – note the missing/commented `uvm_field_int]

Hopefully the above makes it self-explanatory – add the macro and copy/clone gets enabled for that specific field. So, two lessons learnt today:

1. Use field macros consistently

2. More importantly, use post_randomize as your friendly, automated debug assistant for random generation!

Saturday, February 9, 2013

Simple assertion can save hours of debug time

Recently a user sought to assign a 4-state array (declared as logic) on the DUT side to a 2-state, bit-typed array on the TB side. Quite a normal and intelligent choice of datatype, as the higher-level TB components should work on abstract models. However, there are two important notes – one on syntax/semantics and the other on the real functional aspect.

Focusing on the functional aspect first (the semantic issue would be caught by the compiler anyway): what if the DUT signal contains X/Z in the 4-state array value?

 

[Image: code snippet (svd1) – the 4-state, DUT-side array]

 

When you assign it to its 2-state counterpart on the TB side, there is information loss and potentially wrong data :-(

 

[Image: code snippet (svd2) – the 2-state, TB-side assignment]
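Since the snippets above are images, here is a hedged, self-contained sketch (signal names hypothetical) of the silent information loss:

// X/Z bits in a 4-state value silently become 0 when assigned to a 2-state type.
module tb_2state_loss;
  logic [7:0] dut_data;   // 4-state, DUT side
  bit   [7:0] tb_data;    // 2-state, TB side

  initial begin
    dut_data = 8'b1010_xx1z;
    tb_data  = dut_data;                    // X/Z bits coerced to 0
    $display("dut_data = %b", dut_data);    // 1010xx1z
    $display("tb_data  = %b", tb_data);     // 10100010 - information lost
  end
endmodule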

Here is where a simple assertion can save you hours of debug time. Recall that SV has a handy system function, $isunknown, to detect unknown values. One can write a simple assertion using that function at the DUT-TB boundary. See the full code below, with the assertion part highlighted:

[Image: full code with the assertion highlighted (SVD_SVA)]
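The full code is in the image above; a hedged sketch of the idea (names hypothetical) with the assertion guarding the DUT-TB boundary:

// A concurrent assertion flags X/Z on the 4-state data before the 2-state copy.
module tb_with_sva;
  bit         clk;
  logic [7:0] dut_data;   // 4-state, DUT side
  bit   [7:0] tb_data;    // 2-state, TB side

  always #5 clk = ~clk;

  // The assertion: fail loudly if the 4-state data carries X/Z at the boundary
  a_no_unknown: assert property (@(posedge clk) !$isunknown(dut_data))
    else $error("dut_data has X/Z - the 2-state TB copy would silently drop it");

  always @(posedge clk) tb_data <= dut_data;   // the DUT-to-TB copy

  initial begin
    dut_data = 8'hA5;
    #12 dut_data = 8'b1010_xx1z;   // inject X/Z to trigger the assertion
    #20 $finish;
  end
endmodule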

With the SVA included, here is a transcript – Thank GOD, I used assertions :-)

[Image: simulation transcript (Picture2)]

So next time you move data from DUT-2-TB, consider this simple trick.

For those wondering what the compile-time issue is in dealing with 4-state vs. 2-state, read the Verification Academy forum thread @ http://bit.ly/11xsgO0

– Team CVC

Pragmatic choice of ABV language - PSL still shines better than SVA

 

As many of our readers would recall, CVC first became highly visible to the industry with our early contributions to assertion-based verification (ABV) via IEEE 1850 PSL (Property Specification Language). Back in 2004 we co-authored our first book on this wonderful language, the first of the temporal assertion languages to become a standard (see our timeline on Facebook for more). Since then it has been a wonderful run in this world of functional verification, close to a decade by now.

One of the significant features of PSL has been its simplicity and its succinct means of expressing complex temporals through its “Foundation Language” (a.k.a. LTL-style) subset. We talk about this in detail in our PSL book (http://www.systemverilog.us/psl_info.html). Recently a user came up with a nice requirement at the Verification Academy forum (see: http://bit.ly/14JTHlI).

The spec goes as follows:

[Image: user’s spec/requirement (PSL_in_SV_spec)]

The user attempted a simple SVA-2005-style sequence but got weird results; then our beloved co-author and guru of assertions, Ben Cohen, provided assistance as below (unverified):

[Image: SVA-2005-style attempt (SVA05)]

Do the same in PSL with FL/LTL style:

[Image: the same check in PSL FL/LTL style (PSL_in_SV)]
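The PSL snippet was shared as an image; for readers without it, a hedged reconstruction along the same FL/LTL lines (signal and clock names hypothetical, not the exact code from the post) would look roughly like this, embedded in SystemVerilog via the psl comment pragma:

// A hedged sketch only: once a grant is seen, the design must not reach
// the Idle state until the lock is released.
//
// psl default clock = (posedge clk);
// psl assert always ( gnt -> next ( !idle until rel ) );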

Now relate the PSL code back to the user’s spec/requirement:

May be simple, but drives me crazy..

"Req" -> "Gnt" -> "Rel"

When granted, assert if it is going to Idle state before releasing the lock.

Won’t you agree that PSL, with its FL/LTL style, is a lot closer to the spec than the erstwhile SVA-05 sequence-based approach?

There is light at the end of the tunnel:

1. PSL works well and is usable in all flavors – Verilog, SV, VHDL, SystemC, etc.

2. It costs nothing extra in tools – if you have paid for SV, it is very likely that you got PSL too.

3. The SV-2009 standard did add these LTL features, but they are yet to be supported by many vendors, so your chances of using them in live projects are weak. Of course, do push your vendor for them.

Bottom line – use what works today: PSL is alive and kicking, and you have already paid for it in your tool. There is hardly any extra learning – if you know one temporal language, the syntax is very similar – so why not get pragmatic and use it!