Everything posted by bhunter1972

  1. I've found a use for set_int_local and noticed that it does not work with dynamic arrays or queues. The issue can be demonstrated with this code: https://www.edaplayground.com/x/6B8e

     The problem appears to be on the first line of this code block:

        else if(uvm_is_match(str__, {__m_uvm_status_container.scope.get_arg(),$sformatf("[%0d]", index__)})) begin \
           if(index__+1 > ARG.size()) begin \
              int sz = index__; \
              int tmp__; \
              `M_UVM_``TYPE``_RESIZE(ARG,tmp__) \
           end \
           if (__m_uvm_status_container.print_matches) \
              uvm_report_info("STRMTC", {"set_int()", ": Matched string ", str__, " to field ", __m_uvm_status_container.get_full_scope_arg()}, UVM_LOW); \
           ARG[index__] = uvm_object::__m_uvm_status_container.bitstream; \
           __m_uvm_status_container.status = 1; \
        end \

     At this line, str__ is set to "cfg.arr_var[0]", while {__m_uvm_status_container.scope.get_arg(), $sformatf("[%0d]", index__)} evaluates only to "arr_var[0]", hence the match does not succeed. I believe the code would be correct if it used get() instead of get_arg().
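     For what it's worth, here is a minimal sketch of the kind of setup that trips over this. The names are illustrative only (not the actual EDA Playground code), and it assumes a dynamic array registered with `uvm_field_array_int inside a sub-object registered with `uvm_field_object:

        // Hypothetical reproduction -- all names are illustrative only
        class cfg_c extends uvm_object;
           int unsigned arr_var[];   // dynamic array registered below
           `uvm_object_utils_begin(cfg_c)
              `uvm_field_array_int(arr_var, UVM_DEFAULT)
           `uvm_object_utils_end
           function new(string name="cfg");
              super.new(name);
           endfunction : new
        endclass : cfg_c

        class top_cfg_c extends uvm_object;
           cfg_c cfg;
           `uvm_object_utils_begin(top_cfg_c)
              `uvm_field_object(cfg, UVM_DEFAULT)
           `uvm_object_utils_end
           function new(string name="top_cfg");
              super.new(name);
              cfg = cfg_c::type_id::create("cfg");
           endfunction : new
        endclass : top_cfg_c

        // e.g., from the test: this works for scalar fields, but for the dynamic
        // array the macro compares "cfg.arr_var[0]" against scope.get_arg()
        // ("arr_var[0]") and never matches
        top_cfg.set_int_local("cfg.arr_var[0]", 'hAB);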
  2. UVM requires that the sequencer first stop its sequences, and then the driver must be certain not to call item_done() on any outstanding sequence items. To get this working correctly, there's a simple recipe in this paper from a few years ago, and also in the book: http://sunburst-design.com/papers/HunterSNUGSV_UVM_Resets_paper.pdf
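     As a rough sketch of the driver half of that recipe (this is not lifted from the paper; my_item_c, my_intf, the drive_items() task, and reset_n are all placeholder names):

        class my_driver_c extends uvm_driver#(my_item_c);
           `uvm_component_utils(my_driver_c)

           virtual my_intf vif;   // placeholder interface with an active-low reset_n

           function new(string name, uvm_component parent);
              super.new(name, parent);
           endfunction : new

           virtual task run_phase(uvm_phase phase);
              forever begin
                 wait(vif.reset_n === 1'b1);          // only request items while out of reset
                 fork begin
                    fork
                       drive_items();                 // the normal get_next_item/item_done loop
                       wait(vif.reset_n === 1'b0);    // reset asserted: abandon the loop
                    join_any
                    disable fork;
                 end join
                 // the sequencer has stopped its sequences, so do NOT call
                 // item_done() here for an item whose sequence has been killed
              end
           endtask : run_phase

           virtual task drive_items();
              forever begin
                 seq_item_port.get_next_item(req);
                 // drive the pins here (placeholder)
                 seq_item_port.item_done();
              end
           endtask : drive_items
        endclass : my_driver_c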
  3. Improving transaction recording

    Only if recording is enabled for that transaction. To do that, you need to call the enable_recording() function and give it a stream.
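     As a minimal sketch, assuming the UVM-1.2 recording API (the stream name, the transaction type, and the placement in a monitor are just examples):

        // e.g., in a monitor: open a stream on the default transaction database
        // and hand it to the transaction before begin_tr()/end_tr()
        uvm_tr_database db     = uvm_coreservice_t::get().get_default_tr_database();
        uvm_tr_stream   stream = db.open_stream("my_mon_stream");    // example stream name

        my_item_c item = my_item_c::type_id::create("item");         // example transaction type
        item.enable_recording(stream);                                // recording now permitted
        item.begin_tr();                                              // start of the recorded transaction
        // ... collect the transaction ...
        item.end_tr();                                                // end of the recorded transaction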
  4. I can do even better. You can go here and press Click Inside to look at it yourself: http://tinyurl.com/h7nfgbz
  5. how to use uvm_in_order_comparator

     I find it to be a perfectly reasonable alternative to writing your own comparator. I'm not sure why it was considered old back in 2012. But it does only one thing, so if you need any more advanced comparisons, it's not that useful. Implementing do_compare in your class is a pretty simple thing to do, and once that's done, do_compare gives you the ability to create your own custom comparators with more flexibility.
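     For reference, a bare-bones do_compare() implementation looks something like this (the transaction type and fields are made up):

        class my_item_c extends uvm_sequence_item;
           `uvm_object_utils(my_item_c)

           rand bit [31:0] addr;
           rand bit [31:0] data;

           function new(string name="my_item");
              super.new(name);
           endfunction : new

           // field-by-field comparison used by compare() and by any comparator
           virtual function bit do_compare(uvm_object rhs, uvm_comparer comparer);
              my_item_c rhs_;
              if(!$cast(rhs_, rhs))
                 return 0;
              return super.do_compare(rhs, comparer) &&
                     (addr == rhs_.addr) &&
                     (data == rhs_.data);
           endfunction : do_compare
        endclass : my_item_c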
  6. Your sequence has a handle to the sequencer it runs on, called p_sequencer. You can create a TLM imp there, and a TLM port in your block_cfg_mngr component, and then write the data from the port to the imp. Your sequence can then see the value using its handle to the sequencer.
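     A sketch of that wiring, assuming an analysis port/imp pair and invented names (block_cfg_c, block_cfg_mngr_c, my_sqr_c, my_item_c):

        // in the config manager: an analysis port that broadcasts updates
        class block_cfg_mngr_c extends uvm_component;
           `uvm_component_utils(block_cfg_mngr_c)
           uvm_analysis_port#(block_cfg_c) cfg_port;
           function new(string name, uvm_component parent);
              super.new(name, parent);
              cfg_port = new("cfg_port", this);
           endfunction : new
        endclass : block_cfg_mngr_c

        // in the sequencer: an imp that stores the latest value for sequences to read
        class my_sqr_c extends uvm_sequencer#(my_item_c);
           `uvm_component_utils(my_sqr_c)
           uvm_analysis_imp#(block_cfg_c, my_sqr_c) cfg_imp;
           block_cfg_c latest_cfg;
           function new(string name, uvm_component parent);
              super.new(name, parent);
              cfg_imp = new("cfg_imp", this);
           endfunction : new
           virtual function void write(block_cfg_c cfg);
              latest_cfg = cfg;
           endfunction : write
        endclass : my_sqr_c

        // in the env's connect_phase:  cfg_mngr.cfg_port.connect(sqr.cfg_imp);
        // in the sequence (declared with `uvm_declare_p_sequencer(my_sqr_c)):
        //                              my_cfg = p_sequencer.latest_cfg;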
  7. Under the right circumstances, UVM 1.2 can report an error such as this one:

        'uvm_test_top.some_sequence' attempted to raise on 'run', however 'run' is not a task-based phase node! (This is a UVM_PHASE_IMP, you have to query the schedule to find the UVM_PHASE_NODE)

     This is perplexing because obviously, run is a task-based phase, and one might think that UVM is off its rocker. But the real issue is what is found in the parentheses. This came up while migrating a codebase to 1.2 (not the funnest of tasks, thank you very much). The user had this code working in 1.1d:

        some_sequence.set_starting_phase(uvm_run_phase::get());

     Now, however, that get() call returns a UVM_PHASE_IMP and not the NODE itself. Most users probably don't care about the difference between these, as it has nothing to do with finding RTL bugs. In any case, the above code must be replaced with something like this so that the NODE is used instead:

        some_sequence.set_starting_phase(uvm_domain::get_common_domain().find(uvm_run_phase::get()));

     This is not exactly the prettiest solution, but it seemed to work for me. Can anyone suggest a better solution?
  8. predictor / TLM model paradigm

     Agree, although I cannot speak for everyone's design or algorithm that they are trying to verify.

     There are generally two types of predictors: "white models" and "scoreboards." There are plenty of other names for the same things, but generally they boil down to those two. The difference between them is that the former tries to emulate exactly what the RTL does and perform a cycle-by-cycle comparison, whereas the latter sees stimulus input and predicts what the design will do at some point in the future. Of course, there are varying degrees to which these descriptions hold true. For example, a white model might not handle error cases very cleanly, but generally gets everything else correct.

     The advantage of the white model is that an error is generated very quickly once the device gets out of sync. The disadvantage, as you said, is that as the implementation matures the white model must be re-coded every time. This, in my experience, is a fatal flaw. But again, it depends on what you're trying to verify.

     The scoreboard approach, meanwhile, provides flexibility for the design to change over time. It also usually has less "granularity" to its predictions. For example, if a series of memory writes to contiguous addresses is expected, followed by a "DONE" cycle, a less granular scoreboard would not predict each and every memory write transaction and the done transaction. Instead, it would model the memory and only look at the final results once the DONE is seen. This approach gives the design the flexibility to chop up these writes as it sees fit, and it makes things easier for us.

     Especially in a constrained random environment, where the variety of stimulus can be arbitrary and outrageous, the white model is generally unacceptable.
  9. So, it was fixed? Was a new version of UVM 1.2 released? Or is it part of UVM 1.3?
  10. At Cavium, we hand-edited our version of UVM (egads!) to explicitly allow periods. We did feel that asterisks and a few others were probably a bad idea, though. I know I'll probably get hate mail for abusing the standard like that, but we've got chips to make. Here is a patch that you can apply:

        Index: 1_2/src/base/uvm_resource.svh
        ===================================================================
        --- 1_2/src/base/uvm_resource.svh (revision 329382)
        +++ 1_2/src/base/uvm_resource.svh (working copy)
        @@ -1412,7 +1412,8 @@
         `ifndef UVM_NO_DEPRECATED
           begin
             for(int i=0;i<name.len();i++) begin
        -      if(name.getc(i) inside {".","/","[","*","{"}) begin
        +      // CAVM: Permit periods inside instance names
        +      if(name.getc(i) inside {"/","[","*","{"}) begin
                `uvm_warning("UVM/RSRC/NOREGEX", $sformatf("a resource with meta characters in the field name has been created \"%s\"",name))
                break;
              end
  11. Come say hi at DVCon 2016 in San Jose on March 2nd at the Real Intent panel! http://dvcon.org/content/event-details?id=199-126
  12. When should we use uvm export

     I've been told that the explanation I have in my book is clear, so I'll just put this here. The diagram in the text helps a lot, but hopefully this will work: When initiating transactions up through a hierarchy, ports can talk to other ports just fine. Imps, though, are always the end of the line. To push transactions down through a hierarchy, TLM provides exports. Exports promote an imp to a higher level in the hierarchy. From another component's point of view, they look exactly like an imp, but the real imp is buried someplace within the hierarchy. With exports, the external component need not know anything about the lower-level hierarchy. Knowing that I shouldn't pass up a chance to advertise, you can get the book at: http://tinyurl.com/AdvancedUVM
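     To make that concrete, here's a small sketch (component and type names are invented) where an env's export promotes a scoreboard's buried imp:

        // the scoreboard owns the real imp, buried inside the env
        class sb_c extends uvm_component;
           `uvm_component_utils(sb_c)
           uvm_analysis_imp#(my_item_c, sb_c) item_imp;
           function new(string name, uvm_component parent);
              super.new(name, parent);
              item_imp = new("item_imp", this);
           endfunction : new
           virtual function void write(my_item_c item);
              // check the item here...
           endfunction : write
        endclass : sb_c

        // the env publishes an export; outsiders connect to it as if it were the imp
        class env_c extends uvm_component;
           `uvm_component_utils(env_c)
           uvm_analysis_export#(my_item_c) item_export;
           sb_c sb;
           function new(string name, uvm_component parent);
              super.new(name, parent);
           endfunction : new
           virtual function void build_phase(uvm_phase phase);
              item_export = new("item_export", this);
              sb = sb_c::type_id::create("sb", this);
           endfunction : build_phase
           virtual function void connect_phase(uvm_phase phase);
              item_export.connect(sb.item_imp);   // the export forwards down to the imp
           endfunction : connect_phase
        endclass : env_c

        // from outside, a monitor's port connects to the export without knowing about sb:
        //    monitor.item_port.connect(env.item_export);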
  13. Randomization of dynamic arrays

     The data array will be created implicitly, based on the .size() constraint, whenever you randomize the class if it isn't already created. However, if you let the simulator do this implicitly, then you lose the ability to constrain any of the array's elements, which may or may not be something you care about. If you do care, here's how I do this. First, in my class's new function, I new the array to whatever the largest possible size will be. This is important, and perhaps a little wasteful. Second, I constrain data.size() as you have above. Third, I can then constrain the data bytes, usually using a foreach constraint. When the class is randomized, the data array will automatically resize itself and the bytes will be constrained appropriately. So, like this:

        class pushk_c extends uvm_object;
           rand int unsigned data_len;
           constraint data_len_cnstr {
              data_len < MAX_DATA_LEN;   // some parameter or constant
           }

           rand byte data[];
           constraint data_cnstr {
              data.size() == data_len;
              foreach(data[idx]) {
                 data[idx] == idx;
              }
           }

           function new(string name="pushk");
              super.new(name);
              data = new[MAX_DATA_LEN];
           endfunction : new
        endclass : pushk_c

     I would prefer not to self-promote, but I will anyway. I demonstrate this a few times in my book: http://tinyurl.com/AdvancedUVM, or for the e-book: http://tinyurl.com/AdvancedUVM-ebook
  14. For your specific case, you might consider an alternative frontdoor path. With this method, you create a sequence that runs on--well, any sequencer, really--and is assigned to the A & B registers using set_frontdoor. This sequence extends uvm_reg_frontdoor and contains an 'rw_info' variable that holds the information of the transaction. Based on the value of C, the registers can then be manipulated like any other. Consider:

        uvm_reg chosen_reg;
        uvm_status_e status;

        chosen_reg = (p_sequencer.reg_block.C.SELECTOR.get() == <A_SELECTED>) ?
                     p_sequencer.reg_block.A : p_sequencer.reg_block.B;
        if(rw_info.kind == UVM_WRITE)
           chosen_reg.write(status, rw_info.value[0]);
        else begin
           uvm_reg_data_t value;
           rw_info.value = new[1];
           chosen_reg.read(status, value);   // fetch the current value
           rw_info.value[0] = value;
        end

     It's something like that. Not to be too self-aggrandizing, but my book demonstrates this method more clearly: http://tinyurl.com/AdvancedUVM, or the e-book: http://tinyurl.com/AdvancedUVM-ebook.
  15. I discovered this issue when multiple sequences attempted to perform a lock() at the same time (not at all uncommon at the beginning of time when all sequences are launched). In 1.1d, the grant_queued_locks() function has a loop with the following comment:

        // Grant the lock request and remove it from the queue.
        // This is a loop to handle multiple back-to-back locks.
        // Since each entry is deleted, i remains constant

     This loop is missing in 1.2. In fact, the whole function looks to have been re-written entirely. To start, I noted that if I simply replaced the 1.2 code with the 1.1d code, the test passes. In 1.2, the second begin block in grant_queued_locks() starts out like this:

        // now move all is_blocked() into lock_list
        begin
           uvm_sequence_request leading_lock_reqs[$],blocked_seqs[$],not_blocked_seqs[$];
           int q1[$];
           int b=arb_sequence_q.size(); // index for first non-LOCK request
           q1 = arb_sequence_q.find_first_index(item) with (item.request!=SEQ_TYPE_LOCK);
           if(q1.size())
              b=q1[0];
           if(b!=0) begin // at least one lock
              leading_lock_reqs = arb_sequence_q[0:b-1]; // set of locks; arb_sequence[b] is the first req!=SEQ_TYPE_LOCK

     Note the variable 'b'. It starts out as being the size of a queue, but then changes to an index in the arb_sequence_q variable. If that index happens to be 0, then the 'at least one lock' if statement won't be true. This is a real issue. I'm going to try to find a patch that fixes it, but unless somebody here has an objection, I'll be submitting it to the Mantis page shortly.
  16. Well, like I said, for brevity's sake I excluded the text from the book. But in short: you can use wait_next_delay() within a sequence to wait a period of time, or you can embed this delay object in your driver and wait for a number of clocks. Maybe I'm being too idealistic, but you never want to tie your sequences to a number of "clocks". Sequences have no notion of clocks or interfaces. Drivers see clocks; sequences see time. There are good reasons for this, but that would be a longer conversation. Your issue with regard to falling off of the clock boundary in your driver is a known one, so the book will show you how to avoid that. In short, your driver should try to fetch the next item; if there isn't one, it should call get_next_item() and then wait for a clock cycle. Then it drives. This allows back-to-back items if the sequencer is pushing quickly. After driving an item, you would call get_next_delay() and wait that number of clock cycles. This has probably gone on too long. The bigger answer to your question, which I think Linc and I would agree upon, is to create a reusable OO solution.
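     Here's a rough sketch of that driver loop, assuming a virtual interface with a clk, a placeholder drive_item() task, and a handle to the rand_delays_c object shown in the next post:

        // hedged sketch: fetch items back-to-back when available, otherwise block,
        // then re-align to the clock before driving; 'delays' is a rand_delays_c handle
        virtual task run_phase(uvm_phase phase);
           forever begin
              seq_item_port.try_next_item(req);
              if(req == null) begin
                 seq_item_port.get_next_item(req);   // block until something shows up
                 @(posedge vif.clk);                 // re-align to the clock boundary
              end
              drive_item(req);                       // placeholder pin-wiggling task
              seq_item_port.item_done();
              // after driving, burn the randomized inter-item delay in clocks
              repeat(delays.get_next_delay()) @(posedge vif.clk);
           end
        endtask : run_phase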
  17. Funny you should ask this. I have a book coming out before the end of the year which has a bunch of UVM recipes, with one on this topic. It's similar to Linc's solution above, except that it is a randomizable class instantiated in the environment that either sequencers or drivers can use. Here's what it looks like. For brevity, I'll just post the class here and not all of the explanatory text that comes with it.

        class rand_delays_c extends uvm_object;
           //----------------------------------------------------------------------------------------
           // Group: Types

           // enum: traffic_type_e
           // Allows for a variety of delay types
           typedef enum {
              FAST_AS_YOU_CAN,
              REGULAR,
              BURSTY
           } traffic_type_e;

           typedef int unsigned delay_t;

           `uvm_object_utils_begin(rand_delays_c)
              `uvm_field_enum(traffic_type_e, traffic_type, UVM_DEFAULT)
              `uvm_field_int(min_delay,      UVM_DEFAULT | UVM_DEC)
              `uvm_field_int(max_delay,      UVM_DEFAULT | UVM_DEC)
              `uvm_field_int(burst_on_min,   UVM_DEFAULT | UVM_DEC)
              `uvm_field_int(burst_on_max,   UVM_DEFAULT | UVM_DEC)
              `uvm_field_int(burst_off_min,  UVM_DEFAULT | UVM_DEC)
              `uvm_field_int(burst_off_max,  UVM_DEFAULT | UVM_DEC)
              `uvm_field_int(wait_timescale, UVM_DEFAULT | UVM_DEC)
           `uvm_object_utils_end

           //----------------------------------------------------------------------------------------
           // Group: Random Fields

           // var: traffic_type
           rand traffic_type_e traffic_type;

           // var: min_delay, max_delay
           // Delays used for REGULAR traffic types
           rand delay_t min_delay, max_delay;

           // var: burst_on_min, burst_on_max
           // Knobs that control the random length of bursty traffic
           rand delay_t burst_on_min, burst_on_max;

           // var: burst_off_min, burst_off_max
           // Knobs that control how long a burst will be off
           rand delay_t burst_off_min, burst_off_max;

           // var: wait_timescale
           // The timescale to use when wait_next_delay is called
           time wait_timescale = 1ns;

           //----------------------------------------------------------------------------------------
           // Group: Local Fields

           // var: burst_on_time
           // When non-zero, currently burst mode is on for this many more calls
           delay_t burst_on_time = 1;

           //----------------------------------------------------------------------------------------
           // Group: Constraints

           // constraint: delay_L0_cnstr
           // Keep min knobs <= max knobs
           constraint delay_L0_cnstr {
              traffic_type == REGULAR -> (min_delay <= max_delay);
              traffic_type == BURSTY  -> (burst_on_min <= burst_on_max) &&
                                         (burst_off_min <= burst_off_max);
           }

           // constraint: delay_L1_cnstr
           // Safe delays
           constraint delay_L1_cnstr {
              max_delay     <= 500;
              burst_on_max  <= 500;
              burst_off_max <= 500;
           }

           //----------------------------------------------------------------------------------------
           // Group: Methods

           ////////////////////////////////////////////
           // func: new
           function new(string name="rand_delay");
              super.new(name);
           endfunction : new

           ////////////////////////////////////////////
           // func: get_next_delay
           // Return the length of the next delay
           virtual function delay_t get_next_delay();
              case(traffic_type)
                 FAST_AS_YOU_CAN: get_next_delay = 0;
                 REGULAR: begin
                    std::randomize(get_next_delay) with {
                       get_next_delay inside {[min_delay:max_delay]};
                    };
                 end
                 BURSTY: begin
                    if(burst_on_time) begin
                       burst_on_time -= 1;
                       get_next_delay = 0;
                    end else begin
                       std::randomize(get_next_delay) with {
                          get_next_delay inside {[burst_off_min:burst_off_max]};
                       };
                       std::randomize(burst_on_time) with {
                          burst_on_time inside {[burst_on_min:burst_on_max]};
                       };
                    end
                 end
              endcase
           endfunction : get_next_delay

           ////////////////////////////////////////////
           // func: wait_next_delay
           // Wait for the next random period of time, based on the timescale provided
           virtual task wait_next_delay();
              delay_t delay = get_next_delay();
              #(delay * wait_timescale);
           endtask : wait_next_delay
        endclass : rand_delays_c
  18. Requirement of objections in UVM

     Some components may have a run-time phase which operates in a "Run forever and wake up when there's something to do" mode, like this simplistic driver:

        virtual task run_phase(uvm_phase phase);
           forever begin
              seq_item_pull_port.get(req);
              phase.raise_objection(phase);
              drive_my_request(req);
              phase.drop_objection(phase);
           end
        endtask : run_phase

     When there are no objections remaining, all of the tasks spawned by all of the components' run phases are killed. This is a feature, not a bug. It just wouldn't do to kill this run_phase task while it was driving a request, and UVM cannot implicitly know how long that will take. Therefore, the objection software pattern is used for all run-time phases.
  19. Here's a high-level overview: When the build phase starts, the test component is created and its build phase is run. As the test creates more components, each component registers with the UVM environment and is added to a list of components whose build phases haven't run yet. When the test's build phase is complete, UVM then goes through the list of components and runs their build phases. Each, in turn, may add more sub components to the list. Wash, rinse, repeat. When the list is empty, then all components have run through their build phases and the UVM build phase is complete. Same thing happens with the connect phase and all the rest, except that in those no more components will be added. So, you see, it's just like a car.
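     In code terms (names invented), that top-down ordering is why children created in one component's build_phase get their own build_phase run afterwards:

        // invented names; just illustrates the ordering described above
        class my_agent_c extends uvm_agent;
           `uvm_component_utils(my_agent_c)
           function new(string name, uvm_component parent);
              super.new(name, parent);
           endfunction : new
        endclass : my_agent_c

        class my_env_c extends uvm_env;
           `uvm_component_utils(my_env_c)
           my_agent_c agent;
           function new(string name, uvm_component parent);
              super.new(name, parent);
           endfunction : new
           // queued when the test's build_phase creates the env; may queue still more children
           virtual function void build_phase(uvm_phase phase);
              agent = my_agent_c::type_id::create("agent", this);
           endfunction : build_phase
        endclass : my_env_c

        class my_test_c extends uvm_test;
           `uvm_component_utils(my_test_c)
           my_env_c env;
           function new(string name, uvm_component parent);
              super.new(name, parent);
           endfunction : new
           // runs first; env's build_phase runs only after this function returns
           virtual function void build_phase(uvm_phase phase);
              env = my_env_c::type_id::create("env", this);
           endfunction : build_phase
        endclass : my_test_c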
  20. phase jump to function phases

    Based on what the developers have told me, phase jumping to a pre-run phase might "kinda-sorta work so don't do it." Put more simply, phase jumping was not intended to go back to a pre-run phase, so even though the functions may get called again this will lead to issues. Probably they should have more clearly made it NOT work, but whatever.
  21. phase jump to function phases

    I do not believe that it is "legal" to jump back to the function phases. You can jump forward to the check phase, for example, and you can jump back to the reset phase. But you definitely cannot go back and redo the build phase.
  22. Parameterized interfaces

     It's for this reason that I never use parameterized interfaces. I know that probably sounds a bit ridiculous, but I find it's easier to make an interface wide enough to contain all the necessary bits for all types of instantiations. Say you have a 64-bit interface that is sometimes driven at 32 bits. In my drivers and monitors, I use configurable fields to tell how wide the bus is and work from there. In the interface, I might create a field called 'bus_width' and assign it so that I can use it to control assertions or any other active logic. From there, I can plunk down the interface and let random constraints or configuration options choose how wide my interface is today. It may get a little complicated in places, but I've found this approach to be worth the effort. Most importantly, it saves me from having to type my_pkg::drv_c#(BUS_WIDTH), my_pkg::agent_c#(BUS_WIDTH), and on and on and on.
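     A sketch of what that might look like (signal and parameter names are invented):

        // max-width interface; bus_width tells the VIP how much of it is in use today
        interface my_bus_intf(input logic clk);
           localparam int MAX_WIDTH = 64;

           int unsigned bus_width = MAX_WIDTH;   // set by the testbench/config at time 0
           logic        valid;
           logic [MAX_WIDTH-1:0] data;

           // example of bus_width controlling active logic: unused upper bits stay zero
           property unused_bits_are_zero;
              @(posedge clk) valid |-> ((data >> bus_width) == '0);
           endproperty
           assert property(unused_bits_are_zero);
        endinterface : my_bus_intf

     The driver and monitor read the same bus_width (typically from a config object) and only drive or sample data[bus_width-1:0].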
  23. uvm sequence

     Totally disagree with the above answer; the best approach is to use the macros. Religion aside for a second, though, it all depends on the situation: if you want to create a sequence on a sequencer, randomize it, and send it, then `uvm_do does all that for you. If you want to create a sequence, set some values inside of it, and then send it, then the combination of `uvm_create and `uvm_send is best. If you need constraints, use the _with macros. If it's on another sequencer, use the _on macros. Mix and match as appropriate.

     As for whether to use macros or not, I agree that it's best to know what they all do under the hood. However, the macros have two benefits:

     1. They reduce the amount of code that you have to type, which reduces the number of mistakes you need to debug.
     2. If the underlying code of the macros is later improved, you don't need to change any code anywhere at any time. You get the improvements for free.
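     For reference, a quick sketch of those variants inside a sequence's body() (the sequence type, its 'length' field, and the other sequencer handle are made up):

        // inside some parent sequence's body() task
        virtual task body();
           my_sub_seq_c seq;

           // create + randomize + send, all in one shot
           `uvm_do(seq)

           // create + randomize with inline constraints
           `uvm_do_with(seq, { length == 8; })

           // create first, set fields by hand, then send (`uvm_send does not randomize)
           `uvm_create(seq)
           seq.length = 4;
           `uvm_send(seq)

           // same as `uvm_do, but targeting a different sequencer
           `uvm_do_on(seq, p_sequencer.other_sqr)
        endtask : body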