
Hi all, I'm new both to this forum and to the verification world (and to electronics in general).

My problem is that I'm struggling to build a good testbench for my design. More specifically, my DUT is an FSM in which the concept of a transaction is not well defined (I have a lot of signals across different interfaces and, basically, they must match the desired values clock by clock). To check the DUT's outputs, my idea was to replicate the FSM in the scoreboard (without looking at the design code) and compare the DUT's outputs against the scoreboard's outputs. The problem arises here: when and how do I have to update my FSM? My previous approach was to have a set of monitors for the DUT's inputs and a set of monitors for the DUT's outputs, and the code of my scoreboard basically had this structure:

forever begin
   update_fsm(tr_input_1, tr_input_2);
   check_output(tr_output_1, tr_output_2);
end

in which the FSM was updated on every clock, because each monitor produces a transaction on every clock.

Of course this is a very bad approach, I suppose...

What is the best architectural choice for doing what I have to do?


Thanks in advance!


I started with the one from Doulos (https://www.doulos.com/knowhow/sysverilog/tutorial/assertions/) and ASIC World (http://www.asic-world.com/systemverilog/assertions1.html), but most of the time now, I refer to the IEEE 1800 Standard document. IMO, simple assertions are pretty easy to write, but nevertheless very powerful. You should become productive in a few days.
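To give a flavour of the syntax, here is a minimal concurrent assertion; the signals clk, rst_n, req and gnt are placeholders, not taken from the original design:

   // Every request must be granted within 1 to 3 clock cycles.
   property p_req_gnt;
     @(posedge clk) disable iff (!rst_n)
       req |-> ##[1:3] gnt;
   endproperty

   a_req_gnt: assert property (p_req_gnt)
     else $error("req was not granted within 3 cycles");

For a cycle-by-cycle FSM like yours, a handful of such properties on each interface can replace a lot of hand-written checking code.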


Assertions are a great hint, but now I'm facing a new doubt. Let's say I have two agents that generate transactions on the same clock edge, causing calls to two different write functions in the scoreboard. These functions read/write the same variables. My doubts are:
1) Must these variables be protected with semaphores? I suppose so, unless some internal mechanism guarantees mutual exclusion.

2) More problematic: is the order in which the functions execute random? The result stored in the variables can change depending on the order of execution, and this leads to inconsistencies.

Thanks again!


I thought of a possible trick to avoid such critical races. This is my idea:
1) in the scoreboard class, declare a set of static variables (so <= can be used) that will serve as the input variables;
2) the input transactions, instead of executing any logic, simply update these variables using non-blocking assignments;

3) in the run_phase of the scoreboard class, use a forever loop sensitive to the positive edge of the clock and, inside it, perform all operations in the desired order.

In my mind this should solve the problem, because the scoreboard logic is always evaluated on the old input values, independently of the order in which the components execute. I tried a very simple example and it seems to work...
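A rough sketch of that idea, assuming a UVM scoreboard with a virtual interface vif and illustrative transaction/field names (constructor and factory registration omitted for brevity):

   class my_scoreboard extends uvm_scoreboard;
     // Static so that non-blocking assignment (<=) is legal on them.
     static logic in1, in2;

     // Called from the input monitor's analysis path: only latch values.
     function void write_input(input_tr tr);
       in1 <= tr.data1;  // NBA: the clocked loop below still sees old values
       in2 <= tr.data2;
     endfunction

     // All checking logic runs here, in one deterministic order.
     task run_phase(uvm_phase phase);
       forever @(posedge vif.clk) begin
         update_fsm(in1, in2);
         check_output();
       end
     endtask
   endclass

This mirrors how RTL itself separates sampling (NBA updates) from evaluation, so the write ordering between agents no longer matters.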


Is my reasoning correct? 


One caveat I see here: one of the points of using TLM (aka transactions) is that it's faster. Interesting stuff doesn't happen on every clock cycle, but only every now and then (basically when a transaction finishes), so it's not necessary to check on every clock cycle. Having fewer events you're sensitive to makes the simulation (potentially) faster. If speed isn't a concern here, or if you have to model cycle-accurate behavior, then of course you can/need to clock your scoreboard.
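For contrast, a purely transaction-level check only fires when a complete transaction arrives. A minimal sketch, assuming a queue expected_q filled by a reference model and an illustrative transaction class my_tr:

   // Called once per finished DUT transaction, not once per clock.
   function void write_actual(my_tr tr);
     my_tr exp;
     if (expected_q.size() == 0)
       `uvm_error("SCB", "unexpected transaction from DUT")
     else begin
       exp = expected_q.pop_front();
       if (!tr.compare(exp))
         `uvm_error("SCB", "transaction mismatch")
     end
   endfunction

Whether this fits depends on whether a meaningful end-of-transaction event can be defined for the FSM's interfaces at all.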

