Everything posted by tudor.timi

  1. @Alan I guess you mean SystemC TLM, not TLM2. In any case, that's pretty good feedback. I've updated the example on EDAPlayground and there it seems that modifying any field of an inner object isn't allowed if the handle to the parent object is defined as const.
  2. AFAIK, subroutine calls in SystemVerilog use pass-by-reference semantics. This means that what you say (that the const keyword only affects the handle) doesn't make sense, since input references can't be reassigned anyway. The LRM specifically states in Section 13.5.2, Pass by reference, that using the const ref qualifier for an argument means the object referenced through it cannot be modified, and any attempt to do so results in a compile error. Here's an example on EDAPlayground that shows this in action. The UVM analysis port indeed doesn't use constant input arguments.
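     A minimal sketch of the const ref behavior described above (the class and task names are made up for illustration):

```systemverilog
class packet;
  int unsigned len;
endclass

// With 'const ref', any write through the argument is a compile error.
task automatic print_packet(const ref packet p);
  $display("len = %0d", p.len);
  // p.len = 0;  // illegal: 'p' is a const ref argument
endtask
```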
  3. These are fixed at elaboration time. Some simulators offer ways to set parameters from the command line to a value different from the one set inside the code. This is conceptually similar to a force on a signal. You'll have to look up the exact switch in your tool's documentation. If your simulator doesn't have this feature (though I expect all Big Three simulators to have something along these lines), you can work around it by setting up a package in which you define your parameter/generic values and pass those to the DUT. To vary the values of these parameters you'd need to compile a different version of this package for each configuration.
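     The package workaround mentioned above could be sketched like this (the package and parameter names are assumptions):

```systemverilog
// file: dut_params_pkg.sv -- recompile with different values per configuration
package dut_params_pkg;
  parameter int unsigned BUS_WIDTH = 32;
endpackage

// In the testbench top, the value is forwarded to the DUT, e.g.:
// my_dut #(.BUS_WIDTH(dut_params_pkg::BUS_WIDTH)) dut_i ( /* ports */ );
```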
  4. It's not really clear to me what you mean by parameters of the DUT. Do you mean actual Verilog/SystemVerilog parameters (which are static during a simulation run) or just configuration options?
  5. You forgot to register your a_config class with the factory:

         class a_config extends uvm_object;
           `uvm_object_utils(a_config)

           int buswidth;

           extern function int get_buswidth();
         endclass

     The `uvm_object_utils macro declares the type_id field (along with some others).
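     With the registration in place, the usual factory creation idiom becomes available (a short sketch; the cfg naming is illustrative):

```systemverilog
// Create through the factory instead of calling new() directly,
// so that type overrides registered with the factory take effect.
a_config cfg;
cfg = a_config::type_id::create("cfg");
```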
  6. @David That would work, but it relies on using a singleton as a global instance, which is usually frowned upon in software development. Using the config DB also means relying on a singleton (the DB itself), but at least using the complete path to set a config setting is more portable for vertical reuse. When relying on the event pool, there is a chance that the same key is used in multiple places (I guess this is what you meant by not selecting the string too casually).
  7. Since you only have one interface you're driving (the DUT's RX interface), you can make your life easier and just pass a handle of the RX monitor (which watches the DUT's TX interface) to your sequence using the config DB:

         class my_sequence extends uvm_sequence #(my_item);
           // handle to the monitor
           rx_monitor monitor;

           task body();
             // get the monitor handle
             uvm_config_db #(rx_monitor)::get(p_sequencer, "", "monitor", monitor);
             // drive first TX
             @(monitor.item_finished_e);  // wait for an item on the TX interface
             // drive second TX
           endtask
         endclass

     Somewhere in your testbench you do the corresponding "set".
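     The corresponding set could look roughly like this (the component paths are assumptions and must match your testbench hierarchy):

```systemverilog
// e.g. in the test's or env's build code; "*" makes the handle
// visible to any context underneath this component
uvm_config_db #(rx_monitor)::set(this, "*", "monitor", rx_agent_i.monitor);
```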
  8. One more thing to consider: if your DUT's RX interface is unidirectional (i.e. no handshake mechanism), then you don't need any responder sequence. A simple monitor would just suffice. In this case, your virtual sequencer should contain a handle to the TX sequencer and one to the RX monitor. To synchronize, you can use the event from the monitor that signals that an item was collected.
  9. Since you want to coordinate stimulus across two interfaces, a virtual sequence makes the most sense in this case. You'd need a virtual sequencer with handles to both agents' sequencers. The virtual sequence running on this sequencer would dispatch the TX and RX/responder sequences to the appropriate sequencers. I'm assuming you've watched Tom Fitzpatrick's tutorial on responder sequences. Let's say yours would look like this (pseudocode off the top of my head):

         class rx_responder_seq extends uvm_sequence #(my_item);
           // event we can use to synchronize when the DUT sends out
           // ...
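     The virtual sequencer itself could be sketched like this (the class and handle names are assumptions):

```systemverilog
class virt_sequencer extends uvm_sequencer;
  `uvm_component_utils(virt_sequencer)

  // handles to the agents' real sequencers, assigned by the env at connect time
  uvm_sequencer #(my_item) tx_seqr;
  uvm_sequencer #(my_item) rx_seqr;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
endclass
```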
  10. When the DUT sends out a transaction, don't you have some kind of responder (aka slave) sequence that services that? This could be your sync point.
  11. The way you have it set up now is that the size_c constraint is part of the test_vseq scope. This means it only applies when an instance of test_vseq gets randomized. The `uvm_do_on(...) macro includes a call to randomize() on the item/sequence you pass to it. This means the cfg sequence will get randomized without the size_c block. You have two choices here.

      1. Use inline constraints:

             `uvm_do_on_with(cfg, p_sequencer.cfg_seqr, { cfg.size_x == this.size_x; /* ... */ })

      2. Randomize cfg in test_vseq's scope:

             `uvm_create_on(cfg, p_sequencer.cfg_seqr)
             if (!this.randomize(cfg))
               `uvm_fatal("RANDERR", "Could not randomize cfg")
             `uvm_send(cfg)
  12. Since the discussion is about CAN, wherever I said "1" agent I mean one CAN agent. Any other interfaces you might have (for programming your module, for example) should of course also have their own agents.
  13. I've done some work on the topic and for the TLM interface David is describing we basically had an interconnect component that forwarded the item flowing through it out of an analysis port connected to the monitor. The initiator socket was connected to the DUT and the target socket to the driver. You just have to take care about when you send the item through the analysis port (basically when the protocol steps are over).
  14. Also, I remember reading some blog posts on the topic: http://bryan-murdock.blogspot.de/2014/10/systemverilog-streaming-operator.html http://bryan-murdock.blogspot.de/2014/10/more-systemverilog-streaming-examples.html Have a look at these and see if you can figure out what's happening.
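     As a quick taste of what those posts cover, the streaming operator can for instance reverse the byte order of a word (a minimal sketch):

```systemverilog
module stream_demo;
  initial begin
    int unsigned a = 32'h0A0B0C0D;
    // right-to-left streaming with byte granularity reverses the bytes
    int unsigned b = { << byte {a} };
    $display("%h", b);  // displays 0d0c0b0a
  end
endmodule
```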
  15. A suggestion for this type of question: use StackOverflow. There is already a category for SystemVerilog and there you're more likely to get a quick answer.
  16. @David Noob question: What's a re-clocked synchronous interface?
  17. Even afterwards, catching timing issues is best done with other tools, not with simulation. Simulation is there to show you that the RTL works, IMO. STA and CDC verification tell you that you don't have timing problems. You already have a huge stimulus space to cover when just considering that your signals come right after the clock edge. Adding randomness to the signal timings is only going to blow up your stimulus space even more. You'd be trying to model way too much.
  18. I don't think you're going to achieve anything by simulating random delays w.r.t. clock edges in RTL simulations. If you want to see what effect signal delays have, you need to run this on a (timing-annotated) gate-level netlist, but that defeats the purpose of doing architecture exploration, 'cause to get to the netlist you've already done all the steps of the design process. Also, clock jitter isn't going to bring you anything either. If you care about clock jitter for any clock crossings, then there are specialized tools for this. Personal opinion: Architecture exploration is something abstract.
  19. I'd rather say you need one CAN agent per CAN bus (if you have multiple), not per CAN node.
  20. It depends on what you have to do. I'm guessing you don't really need to verify each sub-module individually, so one big UVM env would be enough. As for the number of agents, it again depends on your objectives. I've been playing around with a CAN controller that has 2 nodes and one agent was enough for me. CAN anyway has a broadcast architecture, so you can communicate with one agent from either of your DUT's nodes.
  21. Hi Judy, I think an even better way of doing it is to call the reset() function whenever the real reset signal is asserted. Somewhere in your testbench you should hook up these two things, and then you won't need to call reset() in your sequences anymore.
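     The hook-up could be sketched like this (the virtual interface, signal name, and sequence handle are assumptions, since the original code isn't shown):

```systemverilog
// Runs in the background, e.g. forked from a testbench component.
task automatic watch_reset(virtual dut_if vif, my_sequence seq);
  forever begin
    @(negedge vif.rst_n);  // real reset asserted (active-low assumed)
    seq.reset();           // the reset() function from the discussion
  end
endtask
```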
  22. You guys are comparing apples and oranges here, then. In this case, the simulator doesn't do what the code tells it to do, but implements its own backdoor. This is why it works for your own static variable, but not for m_inst_id. It's a cool feature, I'll give it that, but it causes confusion.
  23. Even so, using your idea you can still add a "`define CLASS_PKG_DEFINE" before including the class instead of importing the package and you haven't solved anything (I'd even go so far as to say you've done more harm than good). This is an issue that you can't fix by coding alone. You have to train people to use packages, pure and simple. If you use a linting tool, you could also catch such issues, by not allowing any user code (except packages) to include any file (maybe with the exception of uvm_macros.svh).
  24. In the package file, if you add the define before you include the class file, then the condition for the `ifndef won't be true anymore and you won't include the class definition. This means your package file should just be:

          // file: class_pkg.v
          package class_pkg;
            `include "class1.vh"
          endpackage

      Adding define guards for files that are included is AFAIK the way to go. What you usually do is name the define after your file name:

          // file: class1.vh
          `ifndef CLASS1_VH
          `define CLASS1_VH

          class class1;
            ...
          endclass

          `endif

      I get that you're trying to avoid the situation where someone includes the class file directly instead of importing the package.