
Posted

Let's say I have the following DUT.  The UVM environment contains a chain of models/predictors.  Input data flows down this chain and generates the expected CHIP output, which is compared to actual.
Pros: verifies top-level functionality.
Cons: does not verify block-level functionality.
[diagram 9142016_1.jpg: chain of predictors generating the expected CHIP output, compared against actual]

A good start, but I'd like to also verify the blocks in a system setting.  So, I create block-level environments, then reuse them at the top level.
[diagram 9142016_2.png: block-level environments reused at the top level]

Awesome, but wait a minute.  I still need the top-level (input-to-output) verification like in the first example.  However, all 3 of my block predictors are already in use by their corresponding environments' scoreboards, hooked up to the RTL via agents.

How does one do both?  Surely I'm not supposed to instantiate duplicate copies of my block level predictors to create the end-to-end model chain...

Posted

Correction to picture 1: it does verify block-level functionality, but you get poor visibility when something doesn't work, feedback comes late, etc. (all the disadvantages of not splitting design/verification into blocks). For the whole system to work properly it is necessary (but not sufficient) that the blocks work properly.

 

If you have a system that is a simple cascade like this, the only thing you'd want to check is that everything is stitched together properly (e.g. outputs from A get routed properly to B's inputs and so on). You could do this via a formal app for connectivity. You could also do it in simulation by having your agents tap the other side of the connection. What I mean by this is that, for example, your B interface agent for the A env gets connected to the B block's signals and not to the ones in the A block. This way you implicitly check that stuff coming out of A reaches B.
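A minimal sketch of this tapping in the testbench top, assuming hypothetical interface and hierarchy names (b_if, dut.u_b, a_env.b_agent are illustrative, not from the original post):

// tb_top: point the A env's B-side agent at B's actual pins,
// not at the wires on A's boundary (all names are hypothetical)
b_if b_tap_if(.clk(clk));
assign b_tap_if.data  = dut.u_b.data_in;   // tap what B really sees
assign b_tap_if.valid = dut.u_b.valid_in;

initial
  uvm_config_db #(virtual b_if)::set(null, "uvm_test_top.a_env.b_agent*",
                                     "vif", b_tap_if);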

 

If you want to have an end-to-end check like this, you'd need to build the chain yourself. This shouldn't be a big problem, since your "models" (or predictors or whatever you call them) should have some kind of predict(...) function that only returns the expected output transaction given some input transaction. This means you only need to do:

class e2e_scoreboard extends uvm_scoreboard;
  a_model_type a_model;
  b_model_type b_model;
  c_model_type c_model;

  uvm_tlm_fifo #(output_trans_type) expected_trans_fifo;
  // ...

  virtual function void write_input(input_trans_type input_trans);
    // chain the block predictors to compute the end-to-end expected output
    void'(expected_trans_fifo.try_put(
        c_model.predict(b_model.predict(a_model.predict(input_trans)))));
  endfunction

  // push output transaction to actual FIFO

  // pop from both FIFOs and compare
endclass
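The two trailing comments could be fleshed out along these lines (a sketch, assuming an actual_trans_fifo fed from a monitor on the CHIP output and a compare()-able transaction type):

uvm_tlm_fifo #(output_trans_type) actual_trans_fifo;

virtual function void write_output(output_trans_type output_trans);
  void'(actual_trans_fifo.try_put(output_trans));
endfunction

task run_phase(uvm_phase phase);
  output_trans_type expected, actual;
  forever begin
    expected_trans_fifo.get(expected);  // blocks until a prediction exists
    actual_trans_fifo.get(actual);
    if (!actual.compare(expected))
      `uvm_error("E2E_SCBD", "CHIP output does not match predicted output")
  end
endtask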
Posted

Tudor,

Thanks for the reply. 

 

"properly it's necessary (but not sufficient) for the blocks to work properly." - What do you mean by "but not sufficient" ?

 

If you have a system that is a simple cascade like this, the only thing you'd want to check is that everything is stitched together properly (e.g. outputs from A get routed properly to B's inputs and so on).

 I understand and agree.

 

Revisiting this simple example, I realized something so trivial it's almost embarrassing to admit: I am still getting the top-level functional checks using the block-level environments.

My concern was this: using block-level environments, each model gets its input from an agent, and not from the previous model's output.  In my mind, that implied the input might be incorrect, possibly corrupted by the RTL block.

However, I can guarantee the correctness of the input to any stage because it was already checked in the previous stage.  In short, I am an idiot.

Posted

It's not sufficient that all the blocks work properly, because it might be the case that they aren't properly connected to each other: outputs from some block are left hanging and the corresponding inputs are tied off to some values. Cascaded block-level checks can't really find this if your observation points for each block-level environment are its corresponding design block. Example: A can start read or write transactions, but the direction signal doesn't get passed to B, where it's tied to read. The A or B env checks won't fail, but the whole system is buggy.
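For the simulation side, a connectivity check for that direction signal could be as simple as a bound assertion (a sketch; chip_top, u_a.dir_out, and u_b.dir_in are hypothetical names):

module conn_check(input logic clk, input logic a_dir_out, input logic b_dir_in);
  // fails if A's direction output is not what B actually sees
  always @(posedge clk)
    assert (b_dir_in === a_dir_out)
      else $error("direction signal from A does not reach B");
endmodule

bind chip_top conn_check u_conn_check(.clk(clk),
                                      .a_dir_out(u_a.dir_out),
                                      .b_dir_in(u_b.dir_in));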

Posted

Since that is cleared up, let's say I've got the following DUT.  I guess this would be called a "data-flow" DUT.  The Master is initialized via control interface B, then awaits commands on the same interface.

 

[diagram 9_20_2016.jpg: Master plus components A, B, C, D on a shared peripheral bus]

 

I've got block-level environments for components A, B, C, D.  I need top-level functional verification, of course, so I need to verify the behavior of the Master.

The Master performs all kinds of low-level reads and writes on the shared peripheral bus. 

 

Option 1 (traditional): Have a scoreboard verify every peripheral bus transaction.

This sounds like a bad decision.  There are countless transactions, and I'd end up with a 100% accurate Master model, which would change every time the Master software changed.  I really don't care if the Master performed a read-modify-write of a register.  I just care about the final result.

 

Option 2: Use the RAL, or a similar technique, to verify the correctness of data moved around the peripheral bus at particular intervals or checkpoints. 

Example use-case:

  • Master receives command "Component Status?".
  • Master talks to all components, gathering status values.
  • Master writes Status-Packet to Component A.

I'm not interested in HOW it gathered the status; I just want to verify the contents of the response packet (see the sketch below).
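A minimal sketch of that check, assuming the block-level environments keep the RAL mirror up to date, and using hypothetical names (status_pkt_type, reg_model.comp_status[i] are illustrative):

// scoreboard subscriber on Component A's interface (names are hypothetical)
virtual function void write_status_pkt(status_pkt_type pkt);
  foreach (pkt.status[i]) begin
    uvm_reg_data_t mirrored = reg_model.comp_status[i].get_mirrored_value();
    if (pkt.status[i] !== mirrored)
      `uvm_error("STATUS", $sformatf("status[%0d]: got %0h, expected %0h",
                                     i, pkt.status[i], mirrored))
  end
endfunction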

 

Option 3...etc.

 

Am I on the right track?  Thoughts?
