mwhite_cp

UVM reg coverage question


We would like to use multiple types of coverage with our UVM register models: one for register testing and the other for functional testing. We want exhaustive coverage for register testing, while for functional testing we would target typical and corner cases. So we want to have separate coverage models for register and functional tests. What is the recommended way to manage the different coverage models? Thanks!

Hi,

For stimulus:

Use the built-in UVM_REG sequences to check reset values, bit-bash registers, etc., then add your own sequences if the built-in ones are not exhaustive.
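
As a sketch, the built-in sequences can be launched from a register test like this (`my_env` and its `regmodel` handle are assumed names for your bench, not UVM built-ins):

```systemverilog
// Sketch: launching the built-in UVM_REG sequences from a register test.
// `my_env` and its `regmodel` handle are assumptions about your bench.
class reg_builtin_test extends uvm_test;
  `uvm_component_utils(reg_builtin_test)
  my_env env;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    env = my_env::type_id::create("env", this);
  endfunction

  task run_phase(uvm_phase phase);
    uvm_reg_hw_reset_seq reset_seq = uvm_reg_hw_reset_seq::type_id::create("reset_seq");
    uvm_reg_bit_bash_seq bash_seq  = uvm_reg_bit_bash_seq::type_id::create("bash_seq");
    phase.raise_objection(this);
    reset_seq.model = env.regmodel;  // check power-on/reset values
    reset_seq.start(null);
    bash_seq.model = env.regmodel;   // walk 1s and 0s through writable bits
    bash_seq.start(null);
    phase.drop_objection(this);
  endtask
endclass
```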

For coverage:

Generate the coverage model for the registers using ralgen with the -c option.

-c b|a|f|F : generate the specified coverage models.

b: bit-level

a: address map

f: field values, only if 'cover +f' is specified in the RALF spec

F: field values, even if 'cover +f' is not specified in the RALF spec

Multiple coverage models can be specified at the same time.
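
For illustration, an invocation might look like the following. The -c option is as described above; the other options shown (-uvm, -t) are from memory of the Synopsys flow and the file name is made up, so check your ralgen documentation for the exact switches:

```
# Hypothetical invocation -- verify options against your ralgen docs.
# Generate a UVM register model for block 'my_chip' from my_chip.ralf,
# with bit-level and address-map coverage models.
ralgen -uvm -t my_chip -c b -c a my_chip.ralf
```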

Thanks,

Adiel

I think Adiel addressed the first part of your question, coverage for register testing, but not the main point, which is coverage for functional testing. The kind of coverage that comes 'for free' with the UVM register model and generator tools is for register testing only.

The main thing to think about for functional testing is that it is not enough to measure what the register bits are set to and, separately, to measure some functional activity on an interface of your DUT. Instead, we have to combine the two and measure (a) that the registers were set a certain way and (b) that sufficient functional activity or traffic took place across your DUT that used that particular register setting in a meaningful way.

My approach here is to attach a coverage model (one you design yourself; it is design specific) to your scoreboard, and make the register model state available to it (your scoreboard probably already uses a handle to the model anyway) along with analysis reports of the relevant functional traffic. Hook the scoreboard up to the coverage model with an analysis port that carries a container type holding both the transactions and a handle to the register model. This allows you to write meaningful cross coverage specific to your test plan, even though the register writes and the various functional transactions are temporally separate.
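
A minimal sketch of that container type and coverage component might look like this. All names here (`cov_container`, `bus_txn`, `my_reg_block`, the `ctrl.mode` field, `txn.len`) are illustrative placeholders for your own design:

```systemverilog
// Sketch (names illustrative): a container item that pairs functional
// traffic with the register model handle, plus a subscriber that crosses
// a mirrored register field value with a transaction attribute.
class cov_container extends uvm_sequence_item;
  `uvm_object_utils(cov_container)
  bus_txn      txn;       // hypothetical functional transaction
  my_reg_block regmodel;  // handle to the UVM register model
  function new(string name = "cov_container");
    super.new(name);
  endfunction
endclass

class func_coverage extends uvm_subscriber #(cov_container);
  `uvm_component_utils(func_coverage)
  cov_container c;

  covergroup cg;
    mode_cp: coverpoint c.regmodel.ctrl.mode.get_mirrored_value();
    len_cp : coverpoint c.txn.len { bins small = {[1:4]}; bins big = {[5:255]}; }
    mode_x_len: cross mode_cp, len_cp;  // register setting x traffic
  endgroup

  function new(string name, uvm_component parent);
    super.new(name, parent);
    cg = new();
  endfunction

  function void write(cov_container t);
    c = t;
    cg.sample();  // sample register state and traffic together
  endfunction
endclass
```

Sampling the mirrored field value at the moment the traffic item arrives is what bridges the temporal gap between the register write and the traffic that used it.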

So we have two completely separate coverage models, but the UVM register model participates in both. To 'manage' them, do the following with standard UVM configuration techniques:

- enable the register test coverage model only for register tests

- enable the functional test coverage model only for functional tests

Remember that 'free coverage' from UVM Registers is good, but you get what you pay for, and it's not the whole story :-)

See my DVCon paper on scoreboarding, which has a section on coverage describing the technique above.

Hi Gordon,

Thank you very much for your response and the pointer to the paper. I am currently reading it; I appreciate it.

My environment uses multi-stage predictors to predict the final outputs. The final expected and actual outputs are sent to the scoreboard and compared. The predictors get register model handles, but the scoreboard does not.

I have a few questions regarding your method described here. I hope I am not asking too much.

1. Do you think your method will work with my environment? If so, how should I apply it? If not, what do I need to change?

2. How would you hook up the register model to an analysis port?

3. My initial plan was to create a register coverage model for register testing, then add functional coverage for functional testing. There are so many registers and tables in our design that it is quite cumbersome to do this by hand. So I thought about creating two sets of register models: one for register testing and the other for functional testing. What do you think of this method?

Thank you very much!!

No problem, you are welcome:

1 & 2: You would need to draw a box around the multi-stage predictors and scoreboard, put them in a single 'container' component, and give that component an analysis port. The transaction emitted would need to be a compound of the various original traffic transactions processed by your predictors (more likely the 'original' items than the 'converted' items sent to the scoreboard), and it would also carry a handle to the register model.
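
A rough sketch of such a container component follows. It assumes a `cov_container` item class that bundles the original traffic transaction with the register model handle; all type and member names (`my_predictor`, `my_scoreboard`, `bus_txn`, `my_reg_block`) are placeholders for your own environment:

```systemverilog
// Sketch (names illustrative): predictors and scoreboard wrapped in one
// container, which publishes a compound item on its own analysis port.
class predict_sb_container extends uvm_component;
  `uvm_component_utils(predict_sb_container)
  my_predictor  pred;     // hypothetical multi-stage predictor chain
  my_scoreboard sb;
  my_reg_block  regmodel; // set by the enclosing env
  uvm_analysis_port #(cov_container) cov_ap;  // traffic + reg model out

  function new(string name, uvm_component parent);
    super.new(name, parent);
    cov_ap = new("cov_ap", this);
  endfunction

  function void build_phase(uvm_phase phase);
    pred = my_predictor::type_id::create("pred", this);
    sb   = my_scoreboard::type_id::create("sb", this);
  endfunction

  // Called when the scoreboard completes a compare: publish the original
  // traffic item together with the register model handle.
  function void publish(bus_txn orig);
    cov_container c = cov_container::type_id::create("c");
    c.txn      = orig;
    c.regmodel = regmodel;
    cov_ap.write(c);
  endfunction
endclass
```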

3: Interesting idea. I think the way forward might be to get the software tools we use to generate the register model (which also generate the 'free' register coverage) to optionally generate user-defined design coverage from a template of some sort. You would designate which registers/fields you wanted to participate in that extra set of covergroup code. More work is required to flesh this idea out. Maintaining two sets of registers sounds like a cumbersome task too, but I see why you are considering it. Good luck, happy to help further.

Hi Gordon, thank you very much for your response again.

1 & 2: This is my interpretation of your suggestion; if I am wrong, please let me know. I can create a container, something like a uvm_component or uvm_subscriber, that instantiates all the predictors and the scoreboard. This container component would hold the coverage code and use the transactions received from the predictors/scoreboard, together with the register model handle, to build covergroups.

3: Maybe there is no need to create multiple register models if the register covergroups can be enabled only in register testing and the functional covergroups only in functional testing. Can this be done? If so, could you tell me how?

Thanks again!

1 & 2: Yes, exactly. You can choose whether the coverage lives inside or outside the container. I would add an analysis port and put the coverage outside for a degree of decoupling (e.g. vertical reuse might want some predict/scoreboard functionality but not reuse of the coverage). It is not essential, though.

3: There are no 'magic' configuration mechanisms for this; just make your own controls. You would use the standard UVM mechanisms for set/get config via the config/resource database, and control which coverage is turned on or off across all the coverage in your bench. Vertical reuse also benefits from this kind of config, since unit-level coverage would typically be turned off one level up. Different tests can turn different coverage objects / covergroup instantiations on or off. I don't have an example handy; there is probably one on Verification Academy.
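
A minimal sketch of such a user-defined control, using uvm_config_db (the flag names `enable_func_cov` / `enable_reg_cov` are our own convention, not UVM built-ins):

```systemverilog
// In a functional test's build_phase, turn the functional model on and
// the register-test model off (flag names are a bench convention):
class func_test extends uvm_test;
  `uvm_component_utils(func_test)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    uvm_config_db#(bit)::set(this, "env*", "enable_func_cov", 1);
    uvm_config_db#(bit)::set(this, "env*", "enable_reg_cov",  0);
  endfunction
endclass

// In the coverage component, only construct the covergroup when enabled.
// An unconstructed covergroup is never sampled and never reported.
class func_coverage extends uvm_component;
  `uvm_component_utils(func_coverage)
  bit enabled;
  covergroup cg;
    option.per_instance = 1;
    // ... design-specific coverpoints and crosses go here
  endgroup

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    if (uvm_config_db#(bit)::get(this, "", "enable_func_cov", enabled) && enabled)
      cg = new();  // built (hence sampled) only when a test enables it
  endfunction

  function void sample_if_enabled();
    if (enabled) cg.sample();
  endfunction
endclass
```

Note that the register model's own built-in coverage has its own controls in UVM: uvm_reg::include_coverage("*", UVM_CVR_ALL) before the model is built, and set_coverage() on the block afterwards, so a register test can enable those while leaving your functional covergroups off.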
