cliffc
Reputation Activity

  1. Like
    cliffc got a reaction from karandeep963 in Program blocks   
    Death to Programs!
     
    It took me years to arrive at this conclusion, but I now believe that programs were a bad idea and should be avoided completely.
     
    After some convincing by other committee members, I voted for inclusion of programs in the SV Standard, and I now regret that vote (please forgive me!)
     
    The idea behind the program was that an engineer would apply stimulus on the active clock edge, the RTL design would completely respond to the clock, the testbench would then calculate new stimulus values on the same clock edge and send them into the RTL design, and any combinational inputs to the RTL design would update before moving to the next clock edge. The goal was to avoid co-dependent, 0-time race conditions between RTL and testbench execution.
     
    The point is that, although it is a semi-common practice, you should never send stimulus on the active clock edge. In real hardware, this is known as a setup or hold-time violation and is never done. In 0-delay RTL simulations, if done properly, it works (but can be subject to a variety of race conditions). In gate-level simulations (GLS), it will violate setup and/or hold times, which means you have to modify the testbench to run with different timing values to perform GLS.
     
    I currently send stimulus using a time-budgeting scheme with my clocking blocks so that stimulus is sent 10%-20% of the clock cycle after the active clock edge (allowing for hold times and to meet long-combinational-input gate-delays). This allows me to use the same testbench for 0-delay RTL and GLS, and closely mimics the behavior of real hardware. If you don't send stimulus on the active clock edge (which you should never do), you don't need a program block.
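    A minimal sketch of this time-budgeting idea, assuming a 10ns clock (the interface and signal names are hypothetical, not my actual code):
     
    // 10ns clock: drive outputs 2ns (20% of the cycle) after the
    // active edge; sample inputs just before the next active edge
    interface dut_if (input logic clk);
      logic [7:0] din;
      logic [7:0] dout;
     
      clocking cb @(posedge clk);
        default input #1step output #2ns;
        output din;
        input  dout;
      endclocking
    endinterface
     
    A testbench drive such as cb.din <= 8'h5a; then lands 2ns after the active edge in both 0-delay RTL simulations and GLS.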
     
    As has already been discussed in this thread, if you use programs, you have confusion, limitations and rules to follow, including the fact that a program cannot instantiate a module, and a module cannot call tasks/functions from a program. Removing programs greatly simplifies testbench development. Programs were a semi-good idea gone bad.
     
    As long as you do not have stimulus driven on the active clock edge (which is a bad idea - have I repeated that enough?), you can replace all program-endprogram keywords with module-endmodule. Chris Spear's very good SV Verification book uses programs. I have already informed my good friend and respected colleague that he should change all occurrences of program-endprogram to module-endmodule.
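    The replacement is purely mechanical; a minimal sketch (block contents are hypothetical):
     
    // was:  program test(dut_if ifc);  ...  endprogram
    module test(dut_if ifc);
      initial begin
        // stimulus driven through clocking blocks, after the active edge
      end
    endmodule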
     
    I have never needed to use programs and standard UVM verification techniques do not use programs. Program usage is highly discouraged in my training classes.
     
    Regards - Cliff Cummings
    Verilog & SystemVerilog Guru
  2. Like
    cliffc got a reaction from David Black in Using $display in UVM   
    Hi, Dave -
     
    I agree. UVM Guideline #1 - quit using $display!
     
    I am still trying to find where I read the part about using $display for table headers (and again, I disagree. You should not use $display ... period!)
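    A minimal sketch of the replacement inside a UVM component (the ID string and message fields are hypothetical) - unlike $display, the message gets a severity, an ID, and a verbosity level that can be filtered at run time:
     
    // instead of:
    //   $display("wrote addr=%0h data=%0h", addr, data);
    // use:
    `uvm_info("DRV", $sformatf("wrote addr=%0h data=%0h", addr, data), UVM_MEDIUM)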
     
    Regards - Cliff
  3. Like
    cliffc reacted to KathleenMeade in UVM_ALL_ON -vs- UVM_DEFAULT   
    Hi Cliff,
    My recommendation is to use UVM_DEFAULT instead of UVM_ALL_ON even though they both essentially do the same thing today. At some point the class library may add another "bit-flag" which may not necessarily be the DEFAULT. If you use UVM_ALL_ON, that would imply that whatever flag it is would be "ON".
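    A minimal sketch of that usage (class and field names are hypothetical):
     
    class my_item extends uvm_sequence_item;
      rand bit [7:0] addr;
      `uvm_object_utils_begin(my_item)
        `uvm_field_int(addr, UVM_DEFAULT)  // rather than UVM_ALL_ON
      `uvm_object_utils_end
      function new(string name = "my_item");
        super.new(name);
      endfunction
    endclass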
    Kathleen
  4. Like
    cliffc reacted to ajeetha.cvc in UVM_ALL_ON -vs- UVM_DEFAULT   
    Same here - many of our customers use these macros and often don't buy into this performance argument, especially if they come from a Specman background and are used to this sort of thing being built into the language. With all due respect to AdamE's excellent paper - why can't the VIP-TSC decide one way or the other for the benefit of the large user base? Having something defined with more than 1000 lines of code in the base classes and then saying "don't use it" doesn't sit well - at least with our customers here.
    On the original point/debate - interestingly, we prefer UVM_ALL_ON to UVM_DEFAULT as it is more "explicit" in naming, and one can make exceptions via, for instance:
    UVM_ALL_ON | UVM_NOCOPY
    With UVM_DEFAULT, it is possible that a newer version of the UVM base code might change the definition of the default, and one would need to update the code!
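    A minimal sketch of that exception style (the field name is hypothetical):
     
    // packed/compared/printed as usual, but never copied
    `uvm_field_int(debug_tag, UVM_ALL_ON | UVM_NOCOPY)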
    Any other views please?
    Ajeetha, CVC
    www.cvcblr.com/blog
  5. Like
    cliffc reacted to bhunter1972 in UVM_ALL_ON -vs- UVM_DEFAULT   
    In my opinion, the recommendation to just never use the field macros is wrong-headed and foolish. I, too, attended Adam's presentation and found it to be compelling but not overwhelming. You should certainly know what they do and how to use them, and in this regard the existing documentation is pretty inadequate. Once you understand how to use them properly, they offer numerous advantages over the do-it-yourself approach, the most important of which is consistency. Further, the use of the macros has the benefit that as the UVM codebase improves, your code will receive those improvements, too.
    But, that is a much longer thread, and may lead to some sort of jihad.
    To your original question, Cliff, there is a slight difference between UVM_DEFAULT and UVM_ALL_ON. From uvm_object_globals.svh (in 1.1b and 1.1c):
    //A=ABSTRACT Y=PHYSICAL
    //F=REFERENCE, S=SHALLOW, D=DEEP
    //K=PACK, R=RECORD, P=PRINT, M=COMPARE, C=COPY
    //--------------------------- AYFSD K R P M C
    parameter UVM_DEFAULT = 'b000010101010101;
    parameter UVM_ALL_ON = 'b000000101010101;
    So, UVM_DEFAULT turns on the "D"EEP bit, whereas UVM_ALL_ON (ironically) has it turned off.
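    Doing the arithmetic on those two parameters, UVM_DEFAULT == (UVM_ALL_ON | UVM_DEEP), so the two differ only in the recursion policy requested for object fields. A minimal illustration with a hypothetical object field:
     
    `uvm_field_object(cfg, UVM_DEFAULT)            // DEEP bit on
    `uvm_field_object(cfg, UVM_ALL_ON | UVM_DEEP)  // equivalent spelling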
  6. Like
    cliffc reacted to bhunter1972 in UVM_ALL_ON -vs- UVM_DEFAULT   
    Efficiency in terms of what? Compile time? Development time? Run time? Or debug time? You have to be specific about what you feel is wrong with the macros and then ask which is most important to you. If the only argument against is that your users misuse it because they don't understand it, then educating them properly is the correct answer.
    Do the macros increase compile time? Sure, because they add code that is not always needed. But, compile-time is the cheapest of all of the above.
    Do they reduce debug and development time? When used properly, yes, I find that they do.
    Adam's presentation profiled OVM. From OVM to UVM he admitted there was already a performance improvement. Will there always be improvements? Hard to know.
    But his solution, that you simply write the print(), copy(), and compare() functions yourself (because it's just so easy?!?), greatly increases your development time and the amount of time you spend debugging the mistakes you make. We're here to find bugs that designers make, not to make more of our own. The macros let you avoid all of that.
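    For contrast, a hedged sketch of the do-it-yourself alternative (class and field names are hypothetical) - every new field must be added here by hand, which is exactly where the debugging time goes:
     
    virtual function bit do_compare(uvm_object rhs, uvm_comparer comparer);
      my_item rhs_;
      if (!$cast(rhs_, rhs)) return 0;
      return super.do_compare(rhs, comparer) &&
             (addr == rhs_.addr) &&
             (data == rhs_.data);
    endfunction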
    I read your answer to the forum question (https://forum.verificationacademy.com/forum/verification-methodology-discussion-forum/uvm-forum/29427-uvmconfigdbset-method-does-not-work). You avoid educating your users about the differences between components and objects and you instead just say "don't use them." That seems like the lazy approach to me.
    Brian
  7. Like
    cliffc reacted to ljepson74 in simple override example - with error   
    Thanks a lot for stepping up to that one, Cliff.  That answer is very clear to me and was well worth the wait.
  8. Like
    cliffc got a reaction from ljepson74 in simple override example - with error   
    Hi, Linc -
     
    I'll take a stab at this one.
     
    Without knowing the Doulos terminology, I think they are referring to "static hierarchy" (top-module, DUT and interface), "dynamic hierarchy" (UVM testbench components - class based), and "transaction objects" (class based data - sequences and sequence_item transactions).
     
    UVM has a predictable set of phases. All UVM testbench components are first built (build_phase), much like module instantiation during compilation. Once all components have been constructed in the build phase using factory calls, we connect them (connect_phase), which is little more than passing class handles to each other (since classes cannot have module-like input/output/inout ports) and is the equivalent of wiring instantiated modules together during elaboration. Only then do we run the simulation.
     
    This is pretty much the same as building a Verilog testbench using modules. You are not allowed to start instantiation (compile), then start wiring modules together (elaborate), then go back and instantiate more modules after elaboration, then simulate for 10ns and instantiate more modules, then wait another 10ns and wire the new modules together.
     
    UVM components are referred to as being semi-static because they are built at time 0, after you execute the run_test() command, and they remain in place and unchanged until the run phase is done in the final time slot of the simulation. Since testbench components are classes, they are technically dynamic, but for all practical purposes they are as static as the rest of the testbench; they are created at time 0 as soon as you call run_test() in the top module.
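    A minimal sketch of that semi-static build/connect flow (all class names are hypothetical):
     
    import uvm_pkg::*;
    `include "uvm_macros.svh"
     
    // trivial stand-in components for the sketch
    class my_driver extends uvm_component;
      `uvm_component_utils(my_driver)
      function new(string name, uvm_component parent); super.new(name, parent); endfunction
    endclass
     
    class my_monitor extends uvm_component;
      `uvm_component_utils(my_monitor)
      function new(string name, uvm_component parent); super.new(name, parent); endfunction
    endclass
     
    class my_env extends uvm_env;
      `uvm_component_utils(my_env)
      my_driver  drv;
      my_monitor mon;
      function new(string name, uvm_component parent); super.new(name, parent); endfunction
      // "instantiation": components created via factory calls
      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        drv = my_driver::type_id::create("drv", this);
        mon = my_monitor::type_id::create("mon", this);
      endfunction
      // "wiring": little more than passing handles around
      function void connect_phase(uvm_phase phase);
        super.connect_phase(phase);
        // e.g. mon.ap.connect(scoreboard.analysis_export);
      endfunction
    endclass
     
    class my_test extends uvm_test;
      `uvm_component_utils(my_test)
      my_env env;
      function new(string name, uvm_component parent); super.new(name, parent); endfunction
      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        env = my_env::type_id::create("env", this);
      endfunction
    endclass
     
    module top;
      initial run_test("my_test");  // everything is built at time 0
    endmodule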
     
    Hope this helps.
     
    Regards - Cliff Cummings
    Verilog & SystemVerilog Guru
  9. Like
    cliffc reacted to uwes in `uvm_info_context - why use it?   
    hi,
     
    the *_context variants of the macros can be used when you want a message to appear as if it were emitted by another object. a common use model: all your components use a common helper class that emits messages, and you want the parent component, rather than the helper class, to appear as the message's reporting context.
     
    the _context macros are just
     
     
    `define uvm_*_context(ID, MSG, CNTXT) \
      begin \
        if (CNTXT.uvm_report_enabled(...)) \
          CNTXT.uvm_report_*(...); \
      end
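    a hedged usage sketch (helper class and names are hypothetical) - the helper reports with its parent component as the context, so the parent's full name shows up in the message instead of the helper's:
     
    class scoreboard_util;
      uvm_component parent;
      function new(uvm_component parent);
        this.parent = parent;
      endfunction
      function void note(string msg);
        `uvm_info_context("SB_UTIL", msg, UVM_MEDIUM, parent)
      endfunction
    endclass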