predictor / TLM model paradigm

What we've done in the past is have "phantom models" that are tightly coupled to the RTL state machine. For example, the RTL receives a command and performs:
Read 0x1, Read 0x2, Write 0x1, Write 0x2.

Our "model/predictor" would do exactly the same thing.

Now, if the RTL changes at all (adds an extra write, or changes the order), the model is wrong.

I think our paradigm for checking and building models is incorrect. It's coupled too tightly to the implementation details. Agree?

I'm guessing the model needs to care more about FUNCTION and less about IMPLEMENTATION. Agree?
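To make the contrast concrete, here is a minimal sketch (Python used as neutral pseudocode, since this is really about a SystemVerilog testbench; the command's semantics, a swap of the contents of 0x1 and 0x2, are invented purely to match the access sequence above):

```python
# Implementation-coupled "phantom model": pins down the exact access
# sequence, so any RTL reordering or extra access breaks the check.
PHANTOM_EXPECTED = [("R", 0x1), ("R", 0x2), ("W", 0x1), ("W", 0x2)]

def check_sequence(observed):
    # Passes only if the RTL performs exactly this sequence.
    return observed == PHANTOM_EXPECTED

# Function-level check: only the architectural end state that the
# command is *defined* to produce matters.
def check_final_state(mem_before, mem_after):
    expected = dict(mem_before)
    # Hypothetical spec: the command exchanges locations 0x1 and 0x2.
    expected[0x1], expected[0x2] = mem_before[0x2], mem_before[0x1]
    return mem_after == expected
```

The second check stays valid no matter how the RTL sequences its reads and writes, which is the point being argued here.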



Agreed, although I cannot speak for everyone's design or the algorithm they are trying to verify.


There are generally two types of predictors:  "white models" and "scoreboards." There are plenty of other names for the same things, but generally they boil down to those two.


The difference between them is that the former tries to emulate exactly what the RTL does and perform a cycle-by-cycle comparison, whereas the latter sees stimulus input and predicts what the design will do at some point in the future. Of course, there are varying degrees to which these descriptions hold true. For example, a white model might not handle error cases very cleanly, but generally gets everything else correct.
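The "predict on stimulus, compare later" half of that distinction can be sketched as follows (Python as neutral pseudocode; in a real UVM bench this would be a scoreboard fed by analysis ports, and the `predict` transform here is a made-up example, not any particular design's spec):

```python
from collections import deque

class Scoreboard:
    """Sketch of a scoreboard: on each stimulus item, predict what the
    design should eventually emit; compare when an output is observed."""

    def __init__(self):
        self.expected = deque()  # outstanding predictions, oldest first

    def predict(self, txn):
        # Hypothetical transform: assume the DUT echoes the op and adds
        # 1 to the data. A real predictor encodes the spec, not the RTL.
        op, data = txn
        return (op, data + 1)

    def on_stimulus(self, txn):
        self.expected.append(self.predict(txn))

    def on_observed(self, txn):
        # Compare against the oldest outstanding prediction.
        assert self.expected, f"unexpected output {txn}: nothing predicted"
        want = self.expected.popleft()
        assert txn == want, f"mismatch: got {txn}, expected {want}"
```

A white model would instead compare state cycle by cycle, which is what makes it fast to flag divergence but brittle against implementation change.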


The advantage of the white model is that an error is flagged very quickly once the device gets out of sync. The disadvantage, as you said, is that as the implementation matures, the white model must be re-coded every time. This, in my experience, is a fatal flaw. But again, it depends on what you're trying to verify.


The scoreboard approach, meanwhile, provides flexibility for the design to change over time. It also usually has less "granularity" to its predictions. For example, if a series of memory writes to contiguous addresses are expected, followed by a "DONE" cycle, a less granular scoreboard would not predict each and every memory write transaction and the done transaction. Instead, it would model the memory and only look at the final results once the DONE is seen. This approach gives the design the flexibility to chop up these writes as it sees fit and makes things easier for us.
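The less-granular approach described above might look like this (a minimal Python sketch; the class and method names are hypothetical):

```python
class MemoryScoreboard:
    """Less-granular scoreboard: model the memory contents and check
    only when DONE is seen, never per-write."""

    def __init__(self):
        self.predicted = {}  # what the stimulus says memory must end as
        self.actual = {}     # mirror built from observed bus writes

    def predict_burst(self, base, data):
        # A burst of writes to contiguous addresses is expected; record
        # only the final contents, not each individual transaction.
        for i, d in enumerate(data):
            self.predicted[base + i] = d

    def observe_write(self, addr, data):
        # No per-write check: the design is free to split, merge, or
        # reorder its writes however it likes.
        self.actual[addr] = data

    def on_done(self):
        # Only the end state matters.
        assert self.actual == self.predicted, "memory mismatch at DONE"
```

Note that the writes can be observed in any order, or even overwritten along the way, and the DONE-time check still passes as long as the final image matches.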


Especially in a constrained-random environment, where the variety of stimulus can be arbitrary and outrageous, the white model is generally unacceptable.


The models we have used are untimed, behavioral models.  They are not white models.


I think what you described about the memory example is what we are looking for.


For example: if we receive an "INIT" command on interface A, we should expect a series of writes to occur on interface B.

The predictor can predict the final state of the memory/register space of the device hanging off interface B.


Use case:

Write Register 0: 0xAB

Write Register 1: 0xCD

Read Register 2 (for read-modify-write)

Write Register 2: bit 7 asserted

Write Memory locations 0 - 63: 0xFF


As the verification engineer, I don't want to worry about things like whether the designer had to do a read-modify-write, or in what order he decided to write the memory (perhaps he started at location 63 and went backwards to 0?).

I want to know, at the end (e.g., when a DONE signal fires), that the "actual" memory reflects what I predicted.
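Under that view, the predictor for this use case reduces to computing one final image (Python as neutral pseudocode; the register numbers and values come from the list above, and the function name is made up). Only Register 2 needs a pre-INIT value, because of the read-modify-write:

```python
def predict_init_state(reg2_before):
    """Predicted end state once the INIT sequence above completes."""
    regs = {
        0: 0xAB,                # Write Register 0: 0xAB
        1: 0xCD,                # Write Register 1: 0xCD
        2: reg2_before | 0x80,  # read-modify-write: bit 7 asserted
    }
    mem = {addr: 0xFF for addr in range(64)}  # locations 0-63: 0xFF
    return regs, mem
```

At DONE, the checker compares the actual register/memory contents against this image; the order in which the designer issued the writes never appears anywhere in the prediction.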


Does that sound right?


I like bhunter1972's reply. My only other comment is that, aside from using a scoreboard, one could in many cases use assertions.

This is because we're ultimately testing the requirements, and in many cases assertions with SVA can do that in a more readable and executable manner.

I make this point in the following papers:
* White paper: "Using SVA for scoreboarding and TB designs"
* "Assertions Instead of FSMs/logic for Scoreboarding and Verification," in Verification Horizons, October 2013, Volume 9, Issue 3, at the Verification Academy
* "SVA in a UVM Class-based Environment"
Ben Cohen
* A Pragmatic Approach to VMM Adoption, 2006, ISBN 0-9705394-9-5
* Using PSL/SUGAR for Formal and Dynamic Verification, 2nd Edition, 2004, ISBN 0-9705394-6-0
* Real Chip Design and Verification Using Verilog and VHDL, 2002, ISBN 0-9705394-2-8
* Component Design by Example, 2001, ISBN 0-9705394-0-1
* VHDL Coding Styles and Methodologies, 2nd Edition, 1999, ISBN 0-7923-8474-1
* VHDL Answers to Frequently Asked Questions, 2nd Edition, ISBN 0-7923-8115
