Logger

Everything posted by Logger

  1. You should use a register model with multiple uvm_reg_maps, one per interface. Assuming you're generating your register model with some sort of tool (rather than writing it by hand), you need to find out how your tool supports that.
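     If the block does end up written by hand, here is a minimal sketch of one register shared by two maps (the map, register, and offset names below are placeholders, not from the original post):

        class my_reg_block extends uvm_reg_block;
          `uvm_object_utils(my_reg_block)

          my_reg      ctrl;      // hypothetical register type
          uvm_reg_map ahb_map;   // one map per physical interface
          uvm_reg_map apb_map;

          function new(string name = "my_reg_block");
            super.new(name, UVM_NO_COVERAGE);
          endfunction

          virtual function void build();
            ctrl = my_reg::type_id::create("ctrl");
            ctrl.configure(this);
            ctrl.build();

            ahb_map = create_map("ahb_map", 'h0, 4, UVM_LITTLE_ENDIAN);
            apb_map = create_map("apb_map", 'h0, 4, UVM_LITTLE_ENDIAN);

            // The same register is visible through both maps.
            ahb_map.add_reg(ctrl, 'h10);
            apb_map.add_reg(ctrl, 'h10);
          endfunction
        endclass

     Each map then gets its own adapter and sequencer via set_sequencer(), and a particular interface is selected per access by passing .map(...) to read()/write().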
  2. You need to provide more detail. Like, why do you currently have to select one at compilation time? What exactly do you mean by parallel access? Do you have 3 separate physical interfaces which can simultaneously access the same register? Or, as is often the case, do you have 3 masters which can all access the same registers via a fabric and a single physical interface to the registers? What are you using for your register model? RAL (uvm_reg)?
  3. Looking for suggestions on the best approach to modeling something akin to the following. value1 and value0 are implemented as value_reg[31:0] in RTL. The value actually stored in this register is always whatever you wrote to it. However, what you read back, and what HW sees when it looks at this register, depends on the value of value_mode. When value_mode == DIRECT, you'll read back whatever value is physically stored: value_reg[15:0] as value0 and value_reg[31:16] as value1. When value_mode == MULT, you'll read back a computed value instead. Let quotient == value_reg[15:0] * value_reg[31:16]. Then you'll read back quotient[15:0] as value0 and quotient[31:16] as value1.

     Right now I've just added tasks write_value( bit [15:0] a, bit [15:0] b ) and read_value( output bit [31:0] quotient ) to the register model, which look at value_mode before accessing value_reg. In both cases, write_value() just writes parameters a and b to value0 and value1 using the register model. When value_mode == DIRECT, read_value() reads value0 and value1, multiplies them, and returns the result. When value_mode == MULT, read_value() reads value0 and value1, concats them, and returns the result.

     I'm contemplating adding some kind of virtual register to the register model instead of the tasks, then implementing the logic using either callbacks or a custom frontdoor. It also needs to preserve support for multiple address maps. The goal is to prevent multiple scoreboards and coverage classes which reference these registers from having to implement the same computation. The idea of a virtual register is to provide the same uvm_reg API to the user as other registers. A side benefit is that the user only makes a single call to read the 32-bit value, rather than making two calls. Cheers.
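     For reference, a rough sketch of what those helper tasks can look like inside the register block, assuming value0, value1, and value_mode are members of that block and DIRECT/MULT are enum literals visible to the model (names taken from the post; status handling and map plumbing are simplified):

        task write_value( input bit [15:0] a, input bit [15:0] b, input uvm_reg_map map = null );
          uvm_status_e status;
          value0.write( status, a, .map(map) );
          value1.write( status, b, .map(map) );
        endtask

        task read_value( output bit [31:0] quotient, input uvm_reg_map map = null );
          uvm_status_e   status;
          uvm_reg_data_t v0, v1;
          value0.read( status, v0, .map(map) );
          value1.read( status, v1, .map(map) );
          if ( value_mode.get_mirrored_value() == DIRECT )
            quotient = v0[15:0] * v1[15:0];     // raw halves came back; compute the product here
          else
            quotient = { v1[15:0], v0[15:0] };  // HW already returned the product, split across the two fields
        endtask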
  4. The short answer is yes. However, from the way you worded your question, I'm immediately inclined to use the PLI/VPI instead of the DPI. There are a lot of ways to skin this cat. If you want more guidance, you'll need to give more details about what you're trying to accomplish.
  5. Seen quite a few posts on various forums like this one. Moving your register model to UVM-1.2 yields a bunch of warnings like this:

        [UVM/RSRC/NOREGEX] a resource with meta characters in the field name has been created

     As far as I can tell this is a bug in uvm_reg_block::configure(). If special regex characters are not allowed in the call to uvm_resource_db#(uvm_reg_block)::set(), then this function needs to sanitize the return value from get_full_name() before passing it to set(). get_full_name() is returning hierarchical paths with dots in them, as it is supposed to, but those are the regex characters that ::set() complains about.

        function void uvm_reg_block::configure(uvm_reg_block parent=null, string hdl_path="");
          this.parent = parent;
          if (parent != null)
            this.parent.add_block(this);
          add_hdl_path(hdl_path);
          uvm_resource_db#(uvm_reg_block)::set("uvm_reg::*", get_full_name(), this);
        endfunction

     I am not alone: https://goo.gl/REafnv -Ryan
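     One possible way to quiet these warnings until the library is fixed (a sketch, not an endorsed fix) is a report catcher keyed on that message id:

        class noregex_demoter extends uvm_report_catcher;
          function new(string name = "noregex_demoter");
            super.new(name);
          endfunction
          // Demote the NOREGEX warnings produced by uvm_reg_block::configure() to infos.
          virtual function action_e catch();
            if (get_id() == "UVM/RSRC/NOREGEX")
              set_severity(UVM_INFO);
            return THROW;
          endfunction
        endclass

        // Register it early in the test, e.g. in build_phase:
        //   noregex_demoter demoter = new();
        //   uvm_report_cb::add(null, demoter);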
  6. This section of the LRM is vague: 18.11 In-line random variable control. What is the expected behavior for the following code?

        class child;
          rand int a;
          rand int b;
          constraint cb {
            a inside {[0:100]};
            b inside {[0:(2*a)]};
          }
        endclass

        class parent;
          //Uncomment to force desired behavior: rand int a;
          rand child c;
          constraint cb {
            //Uncomment to force desired behavior: a == c.a;
            c.b >= c.a;
          }
          //Uncomment to force desired behavior: function void pre_randomize();
          //Uncomment to force desired behavior:   a = c.a;
          //Uncomment to force desired behavior: endfunction
        endclass

        module top;
          initial begin
            parent p = new;
            child c = new;
            c.a = 10;
            p.c = c;
            void'( p.randomize( c.b ) );
            $write( "c.a == %0d ( expecting 10 )\nc.b == %0d\n\n", c.a, c.b );
            c = new;
            c.a = 50;
            p.c = c;
            void'( p.randomize( c.b ) );
            $write( "c.a == %0d ( expecting 50 )\nc.b == %0d\n\n", c.a, c.b );
            $finish;
          end
        endmodule

     In my simulator I get:
  7. Bonus points for testing what happens when you change the constraint to the following: constraint c_1 { soft var1<100; }
  8. What am I missing here? Say you model an APB transaction that includes a cycles_before_delay member. The driver will have to implement that like this:

        seq_item_port.get_next_item( item );
        // stall for item.cycles_before_delay
        // do transaction
        seq_item_port.item_done( );

     During that stall, no other transaction will be able to execute on that driver. Now there was the suggestion of forking this, but you can't call get_next_item() again until you first call item_done(). Calling item_done() before the transaction is actually completed is a non-blocking completion model, as the sequence will return from finish_item() immediately.
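     For concreteness, here is roughly what that blocking driver loop looks like in full (a sketch: the item type, vif handle, pclk signal, and drive_transfer() helper are placeholders, not from the original post):

        virtual task run_phase(uvm_phase phase);
          apb_item item;  // hypothetical transaction type with a cycles_before_delay member
          forever begin
            seq_item_port.get_next_item(item);
            repeat (item.cycles_before_delay) @(posedge vif.pclk);  // stall: nothing else can run here
            drive_transfer(item);                                   // placeholder for the actual APB transfer
            seq_item_port.item_done();                              // only now can the next item be fetched
          end
        endtask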
  9. So consider the scenario where the spec says to write register A, then wait 40 cycles before writing register B. You propose modeling that 40-cycle delay at the end of A or the beginning of B. I know that is how a lot of models do it, but that doesn't make it right. In the case of APB, I'd like to allow other sequences to write to other registers during that delay. Which implies I now need to implement what would otherwise have been a simple in-order atomic driver as a non-blocking driver, so it can fork threads and return immediately. In addition to that, the sequence using the APB transaction can no longer assume that the item is done after calling finish_item(), so it now also has to check the end_event. Furthermore, by default, the sequencer automatically triggers begin_event and end_event unless you define UVM_DISABLE_AUTO_ITEM_RECORDING, which affects all sequencers, not just the sequencer in question. That will change with UVM 1.2, but for the time being this flow is broken.
  10. Good points. I've been real lazy about directly referring to m_sequencer rather than using get_sequencer(). You point out a good reason to stop that. I concur. I don't like this approach for two reasons. First and foremost, the delay before or after a transaction is not part of that transaction (mixing abstractions again), but it is part of the sequence containing that transaction. Hence, the sequence needs some mechanism to perform delays. And time-based delays don't cut it, so it needs to be cycle delays. Second, it requires you to use a non-blocking completion model in your driver. For an in-order atomic interface, the driver should not have to implement a non-blocking completion model to support interleaving of sequences. This makes me think uvm_sequence_base should have a wait_cycles() task, and probably several other standard utility methods. I agree the driver should be handling the actual implementation of these tasks, so somehow the library needs to connect the sequence's wait_cycles() task to a corresponding one on the driver.
  11. I kind of have a gripe against directly referring to the interface from a sequence for these common use-case scenarios. Different sequences on different interfaces can be doing the same thing, but implement it differently so it looks different. m_sequencer is the window/proxy by which a sequence knows which interface it is operating on. I was just meaning to say that a virtual wait_cycles() method in uvm_sequencer_base would make it possible for all sequences to call m_sequencer.wait_cycles(x) without having to refer directly to the interface. That of course adds the burden of having to extend the base sequencer if you want to use that functionality. This would not create a strong type association. Also, it would be so nice if a sequence could call m_sequencer.get_interface(), but because interfaces lack polymorphism that API is not possible. I wish interfaces had some limited form of polymorphism.
  12. I have made it a standard practice to add a wait_cycles() task to all my sequencers to abstract this. Then in my sequences I can simply write: p_sequencer.wait_cycles(x); This requires you use the `uvm_declare_p_sequencer() macro in your sequence. Too bad that's not built into uvm_sequencer. -Ryan
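     A minimal sketch of that pattern, assuming the sequencer holds a virtual interface handle with a clk signal (the class, interface, and signal names here are all illustrative):

        class my_sequencer extends uvm_sequencer #(my_item);
          `uvm_component_utils(my_sequencer)
          virtual my_if vif;   // hypothetical interface with a clk signal

          function new(string name = "my_sequencer", uvm_component parent = null);
            super.new(name, parent);
          endfunction

          // Sequences call this instead of touching the interface directly.
          virtual task wait_cycles(int unsigned n);
            repeat (n) @(posedge vif.clk);
          endtask
        endclass

        class my_seq extends uvm_sequence #(my_item);
          `uvm_object_utils(my_seq)
          `uvm_declare_p_sequencer(my_sequencer)

          function new(string name = "my_seq");
            super.new(name);
          endfunction

          virtual task body();
            // ... do a transaction ...
            p_sequencer.wait_cycles(40);   // e.g. the 40-cycle gap between register writes
            // ... do the next transaction ...
          endtask
        endclass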
  13. But that's the problem: this "standard operating procedure" is not a documented procedure, nor is it a single procedure. No two people do it the same way, and good luck finding an example when you google or search the UVM forums. You'll most likely come across someone raising the question, but it's never answered. So is there no canned solution because we haven't yet agreed on how this should be implemented in the general case? Or is it just one of those things that UVM hasn't gotten around to yet?
  14. It would be really nice to be able to receive TLM messages in some sequences for the purpose of coordinating stimulus with monitored events. I have a solution which gets the job done, but it's not as clean as it should be, and this missing feature feels like a gaping hole in the methodology. I think a very nice solution would be to have the uvm_analysis_imp#() class either implement the get() method, or, perhaps more clearly, add a new method called wait_for_write( T t ). You could then extend your sequencer to have a uvm_analysis_imp#(), and use `uvm_declare_p_sequencer() in your sequence to provide access to the analysis port. Here's a partial example of what the user code would look like:

        class ItemSqr extends uvm_sequencer#( Item );
          uvm_analysis_imp#( OtherItem ) other_item_analysis_export;
          // .. rest of class definition
        endclass

        class TestSeq extends uvm_sequence#( Item );
          `uvm_object_utils( TestSeq )
          `uvm_declare_p_sequencer( ItemSqr )
          // ....
          virtual task body( );
            OtherItem t;
            // .. do some transactions
            p_sequencer.other_item_analysis_export.wait_for_write( t );
            // .. do some more transactions
          endtask
        endclass

     I basically implement this now, but UVM should do it for me. -Ryan
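     In today's UVM the imp needs the implementing component as its second parameter, and that component must provide write(). A rough sketch of the kind of workaround alluded to above, buffering writes in a mailbox on the sequencer (ItemSqr/OtherItem are from the example; everything else is assumed):

        class ItemSqr extends uvm_sequencer #(Item);
          `uvm_component_utils(ItemSqr)

          uvm_analysis_imp #(OtherItem, ItemSqr) other_item_analysis_export;
          protected mailbox #(OtherItem) m_other_items;

          function new(string name = "ItemSqr", uvm_component parent = null);
            super.new(name, parent);
            other_item_analysis_export = new("other_item_analysis_export", this);
            m_other_items = new();
          endfunction

          // Called by the connected monitor's analysis port.
          virtual function void write(OtherItem t);
            void'(m_other_items.try_put(t));
          endfunction

          // What the proposed wait_for_write() would do: block until something arrives.
          virtual task wait_for_write(output OtherItem t);
            m_other_items.get(t);
          endtask
        endclass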
  15. Why does uvm_reg_map::get_n_bytes() return 0 if you call it on a system level map? That is completely useless and tells me nothing about the system. If on the other hand, I call uvm_reg_map::get_n_bytes() from a lower level map in my register model, then I get the narrowest bus width in the map hierarchy. That makes sense. It seems I should get that same value when calling get_n_bytes() on the system level map. Returning 0 is not useful under any circumstance.
  16. To be a little more precise here, the field macros do call uvm_config_db::get() for component fields in the component's build_phase. uvm_config_db::get() is not called for non-phased objects like uvm_sequence_items or plain uvm_objects.
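     In other words, for a component field registered with the field macros, the automation in super.build_phase() does roughly the equivalent of the call below; for a sequence_item or plain uvm_object you have to make such a call yourself from some component context (the field name here is a made-up example):

        // Roughly what the field-macro automation (apply_config_settings) does for a
        // component field during build_phase; "num_beats" is a hypothetical int field
        // registered with `uvm_field_int.
        void'( uvm_config_db#(int)::get(this, "", "num_beats", num_beats) );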
  17. I figured out an example for master and slave ports. Not exactly what I was looking for, but since I didn't find such an example in the UVM docs I thought I'd post it here for your reading pleasure.

        program test;
          import uvm_pkg::*;

          class Master extends uvm_component;
            uvm_master_port#(uvm_sequence_item) port;
            `uvm_component_utils(Master)

            function new(string name="Master", uvm_component parent=null);
              super.new(name,parent);
              port = new("port",this);
            endfunction // new

            virtual task run_phase(uvm_phase phase);
              uvm_sequence_item item = new("master_item");
              phase.raise_objection(this);
              while (!port.try_put(item)) #10;
              `uvm_info("MASTER", {"Put: ", item.get_name()}, UVM_LOW)
              port.get(item);
              `uvm_info("MASTER", {"Got: ", item.get_name()}, UVM_LOW)
              phase.drop_objection(this);
            endtask // run_phase
          endclass // Master

          class Slave extends uvm_component;
            uvm_slave_port#(uvm_sequence_item) port;
            `uvm_component_utils(Slave)

            function new(string name="Slave", uvm_component parent=null);
              super.new(name,parent);
              port = new("port",this);
            endfunction // new

            virtual task run_phase(uvm_phase phase);
              uvm_sequence_item item;
              phase.raise_objection(this);
              port.get(item);
              `uvm_info("SLAVE", {"Got: ", item.get_name()}, UVM_LOW)
              #10;
              item = new("slave_item");
              port.put(item);
              `uvm_info("SLAVE", {"Put: ", item.get_name()}, UVM_LOW)
              phase.drop_objection(this);
            endtask // run_phase
          endclass // Slave

          class Test extends uvm_test;
            Master master;
            Slave slave;
            uvm_tlm_req_rsp_channel#(uvm_sequence_item) chan;
            `uvm_component_utils(Test)

            function new(string name="Test", uvm_component parent=null);
              super.new(name,parent);
              chan = new("chan",this);
            endfunction // new

            virtual function void build_phase(uvm_phase phase);
              master = Master::type_id::create("master",this);
              slave  = Slave::type_id::create("slave",this);
            endfunction // build_phase

            virtual function void connect_phase(uvm_phase phase);
              master.port.connect(chan.master_export);
              slave.port.connect(chan.slave_export);
            endfunction // connect_phase
          endclass // Test

          initial begin
            run_test("Test");
          end
        endprogram // test
  18. I want to create a master model that performs a non-blocking put to a slave. Later, I expect the slave to do a blocking put back to the master. So my master should have a uvm_nonblocking_put_port and a uvm_blocking_put_imp. I'm trying to figure out if one of those bidir port types includes that combination. It's kind of hard to decipher the TLM code. Anyone know of example code using the bidir ports?
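     For what it's worth, if none of the bidir port types fits, the combination can simply be declared explicitly. A sketch of the master side described above (the blocking-put imp requires the master to implement put(); class and port names are illustrative):

        class Master extends uvm_component;
          `uvm_component_utils(Master)

          // Non-blocking requests out to the slave.
          uvm_nonblocking_put_port #(uvm_sequence_item) req_port;
          // Blocking puts coming back from the slave land here.
          uvm_blocking_put_imp #(uvm_sequence_item, Master) rsp_imp;

          function new(string name = "Master", uvm_component parent = null);
            super.new(name, parent);
            req_port = new("req_port", this);
            rsp_imp  = new("rsp_imp", this);
          endfunction

          // Implementation required by the blocking put imp.
          virtual task put(uvm_sequence_item t);
            `uvm_info("MASTER", {"Slave put back: ", t.get_name()}, UVM_LOW)
          endtask
        endclass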
  19. You could add an event to the sequence's events pool. Once you've randomized the sequence_item in the sequence, trigger the event, passing the sequence_item as the parameter. The test can then wait on that event and get the sequence_item to get at the address. edit: If this is a pre-existing sequence that you don't want to modify, extend it instead. You can put the event trigger in the mid_do() or post_do() methods, whichever is appropriate.
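     A quick sketch of that pattern; the event name, the item type, and the test-side handles are made up, and the item is assumed to be the sequence's req:

        // In the sequence, right after randomizing req
        // (the events pool is inherited from uvm_transaction):
        uvm_event ev = events.get("item_randomized");  // hypothetical event name
        ev.trigger(req);                               // hand the item over as trigger data

        // In the test (seq is a handle to the running sequence):
        //   my_item item;
        //   uvm_event ev = seq.events.get("item_randomized");
        //   ev.wait_trigger();
        //   $cast(item, ev.get_trigger_data());
        //   // item.addr now holds the randomized address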
  20. First, why do the type override from your module? You can do that in your test's build_phase instead. Since you are trying to access a member that is not present in the base_env class, you can't use a base_env class handle to access that member. You'll have to use an x_env handle. In this case, you'd have to cast the object like so:

        class x_test extends base_test;
          function void start_of_simulation();
            x_env x;
            $cast(x, env);
            x.x_agent.set_report_verbosity_level(UVM_FULL);
          endfunction
        endclass
  21. Could you post the very obvious error message as well as your verbosity override code or runtime switch?
  22. Oh sweet relief. Thanks Janick. I did not manage to find that one in Mantis. I look forward to this enhancement.
  23. I think the performance issues have largely been addressed since that article was written. Probably still room for improvement, though. My biggest problem with the macros is tracking down the source of compile errors. Practice has made me a lot better at divining the source of the macro-based compile errors. And I do mean divining in the divining-rod sense. I still end up commenting out large blocks of code and doing a binary search occasionally, because the error message says something like "UVM is from Venus and Engineers are from Mars". That said, I think I'm still saving significant time by using the macros. I've been pushing on my vendor to add UVM compile-time linting that checks for proper macro usage. Some simple sanity checks, like making sure the class name passed to the utils macro actually matches the class name, would pretty much eliminate any hassles.
  24. What is the proper use of begin_tr() and end_tr() and their associated events? There are these very nice descriptions for begin_event and end_event:

        // Variable: begin_event
        //
        // A <uvm_event> that is triggered when this transaction's actual execution on the
        // bus begins, typically as a result of a driver calling <uvm_component::begin_tr>.
        // Processes that wait on this event will block until the transaction has
        // begun.
        //
        // For more information, see the general discussion for <uvm_transaction>.
        // See <uvm_event> for details on the event API.
        //
        uvm_event begin_event;

        // Variable: end_event
        //
        // A <uvm_event> that is triggered when this transaction's actual execution on
        // the bus ends, typically as a result of a driver calling <uvm_component::end_tr>.
        // Processes that wait on this event will block until the transaction has
        // ended.
        //
        // For more information, see the general discussion for <uvm_transaction>.
        // See <uvm_event> for details on the event API.
        //
        //| virtual task my_sequence::body();
        //|   ...
        //|   start_item(item);    \
        //|   item.randomize();     } `uvm_do(item)
        //|   finish_item(item);   /
        //|   // return from finish item does not always mean item is completed
        //|   item.end_event.wait_on();
        //|   ...
        //
        uvm_event end_event;

     Great! That is exactly what I want and would expect. Then begin_tr() has even more verbiage that reinforces that:

        // Function: begin_tr
        //
        // This function indicates that the transaction has been started and is not
        // the child of another transaction. Generally, a consumer component begins
        // execution of a transactions it receives.
        //
        // Typically a <uvm_driver> would call <uvm_component::begin_tr>, which
        // calls this method, before actual execution of a sequence item transaction.
        // Sequence items received by a driver are always a child of a parent sequence.
        // In this case, begin_tr obtains the parent handle and delegates to <begin_child_tr>.
        //
        // See <accept_tr> for more information on how the
        // begin-time might differ from when the transaction item was received.
        //
        // This function performs the following actions:
        // blah blah good details here ...
        extern function integer begin_tr (time begin_time=0);

     begin_child_tr() and end_tr() have similar verbiage. Notice the lines I've highlighted.

     Now let's go look at the sequence side of things. According to the UVM user's guide, the basic execution flow of a transaction in a sequence is as follows (this is also what the `uvm_do macro implements):

     a) Call start_item() to create the item via the factory.
     b) Optionally call pre_do() or some other functionality.
     c) Optionally randomize the item.
     d) Optionally call mid_do() or some other functionality, if desired.
     e) Call finish_item().
     f) Optionally call post_do() or some other functionality.
     g) Optionally call get_response().

     However, start_item() has this bit of code in it:

        `ifndef UVM_DISABLE_AUTO_ITEM_RECORDING
            void'(sequencer.begin_child_tr(item, m_tr_handle, item.get_root_sequence_name()));
        `endif

     Which in turn calls uvm_component::begin_tr(). *Screeeeeeeching halt sound* Whaaat? The function description for begin_tr() just said it is "typically called by a driver". But it turns out that begin_tr() is in fact typically called by the sequence executing the transaction (indirectly via the sequencer). Well, it turns out that this is documented, but the documentation has a bit of a multiple personality disorder. Here's a snip from the uvm_transaction class documentation:

        //------------------------------------------------------------------------------
        //
        // CLASS: uvm_transaction
        //
        // ... blah blah blah ...
        //
        // The intended use of this API is via a <uvm_driver> to call <uvm_component::accept_tr>,
        // <uvm_component::begin_tr>, and <uvm_component::end_tr> during the course of
        // sequence item execution. These methods in the component base class will
        // call into the corresponding methods in this class to set the corresponding
        // timestamps (accept_time, begin_time, and end_tr), trigger the
        // corresponding event (<begin_event> and <end_event>, and, if enabled,
        // record the transaction contents to a vendor-specific transaction database.

     I like how the intended use model (as previously stated) was reiterated here. Which is then followed by this single line, which finally gets around to stating how this thing actually works:

        // Note that start_item/finish_item (or `uvm_do* macro) executed from a
        // <uvm_sequence #(REQ,RSP)> will automatically trigger
        // the begin_event and end_events via calls to begin_tr and end_tr.

     Oh, so the default configuration of the UVM library is to not implement the intended methodology, and instead encourage bad behavior. That's nice. Now let's follow that up with this line trying to reiterate the intended use model again:

        // While convenient, it is generally the responsibility of drivers to mark a
        // transaction's progress during execution.

     OK, got it. My drivers should trigger these events, not the sequences. This has been repeatedly stated at least four times up to this point! Surely it must be as simple as calling begin_tr()/end_tr() at the appropriate times in my driver and the start_item()/finish_item() methods will adapt accordingly, no?

        // To allow the driver to control
        // sequence item timestamps, events, and recording, you must add
        // +define+UVM_DISABLE_AUTO_ITEM_RECORDING when compiling the UVM package.

     Oh, of course! Now that we've been repeatedly told what the intended use model is, we finally discover that to actually use it, we must set an obscurely named flag that seems to refer only to automatic transaction recording. Gosh, why didn't I think of that first?

        // Alternatively, users may use the transaction's event pool, <events>,
        // to define custom events for the driver to trigger and the sequences to wait on. Any
        // in-between events such as marking the begining of the address and data
        // phases of transaction execution could be implemented via the
        // <events> pool.

     Alternatively? It seems to me that, given that it's the default configuration of the library, it is the primary method to do so.
     And it also happens to be the only method I've seen demonstrated in every EDA vendor and service provider training I've seen on the subject. It goes on to talk about exactly why you should implement things in the intended way:

        // In pipelined protocols, the driver may release a sequence (return from
        // finish_item() or it's `uvm_do macro) before the item has been completed.
        // If the driver uses the begin_tr/end_tr API in uvm_component, the sequence can
        // wait on the item's <end_event> to block until the item was fully executed,
        // as in the following example.
        //
        //| task uvm_execute(item, ...);
        //|   // can use the `uvm_do macros as well
        //|   start_item(item);
        //|   item.randomize();
        //|   finish_item(item);
        //|   item.end_event.wait_on();
        //|   // get_response(rsp, item.get_transaction_id()); //if needed
        //| endtask
        //|
        //

     Yep. That is indeed the problem I want to solve, if only UVM would get out of its own way.

     It looks like someone was tasked with adding automated transaction recording to UVM and they decided to hijack the begin_event and end_event in order to do so. Except they discovered a lot of VIP (most VIP?) didn't bother to make the appropriate calls to begin_tr() and end_tr(). So they said, "Why don't I make the calls for them!", and here we are. Unfortunately it breaks using begin_event and end_event for anything except automatic transaction recording.

     To build a "proper" agent, I'm supposed to call begin_tr() and end_tr() in my driver. This means I have to use +define+UVM_DISABLE_AUTO_ITEM_RECORDING, and lose that feature. On top of that, other VIP is not going to implement the calls to begin_tr() and end_tr(), because it is going to assume the sequence does it for them. So if you were using those events with other VIP, your tests are now broken until that other VIP is updated. Taking this VIP the other way doesn't work either: calling begin_tr()/end_tr() from your driver and using it in a testbench that doesn't have the +define will cause double triggering of the events. Maybe that won't mess anything up, but it certainly could.

     So I ask these questions:
       ▪ Do you have a pipelined interface in your design?
       ▪ Are you interested in when your transactions start and end?
       ▪ Do you utilize begin_event and end_event?
       ▪ If not, what do you do instead?
       ▪ Would you utilize begin_event and end_event if they worked properly?
       ▪ If the default behavior of the library was changed to not automatically trigger these events, would it break your existing tests?
       ▪ Does anyone use the automatic transaction recording?

     I think this really needs to be fixed. It simply means creating a couple of events specifically for automatic transaction recording to use, rather than hijacking the begin/end events. If the automatic recording could take advantage of the developer manually triggering the begin/end events for greater accuracy, that would be great. But not doing so would be no worse than what we have now.
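     For reference, a rough sketch of the driver-side usage the documentation describes, assuming UVM_DISABLE_AUTO_ITEM_RECORDING is defined so the driver owns the events (the item type and drive_transfer() helper are placeholders):

        virtual task run_phase(uvm_phase phase);
          my_item item;   // hypothetical sequence item type
          forever begin
            seq_item_port.get_next_item(item);
            accept_tr(item);                 // timestamp: item accepted by the driver
            void'(begin_tr(item));           // triggers item.begin_event, starts recording
            drive_transfer(item);            // placeholder for the actual bus activity
            end_tr(item);                    // triggers item.end_event, ends recording
            seq_item_port.item_done();
          end
        endtask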